Medical parameter processing system
A processing system for patient medical parameters includes a communication interface for acquiring patient parameter data comprising a medically significant signal from a patient monitoring device attached to a patient. A transform processor converts the medically significant signal into a plurality of components using a transform. A filter filters the components to exclude components based on criteria to provide filtered components. An inverse transform processor inverse-transforms the filtered components to provide a representation of the medically significant signal.
This application claims priority based on provisional patent application Ser. No. 60/520,016 which was filed on November 14, 2003.
FIELD OF THE INVENTION
The present invention relates generally to the field of biomedical data processing, and more specifically to a method of storing and retrieving clinical telemetry data.
BACKGROUND OF THE INVENTION
As medical equipment becomes more sophisticated and allows seamless connectivity from the point of care to enterprise information management systems, more patient data collected at the point of care becomes available for remote viewing and analysis within the electronic medical record. Such systems may provide at least a subset of the valuable clinical information gathered at the point of care to anywhere within the enterprise, on site and off site, with viewing capability available, even at a clinician's home.
In communicating raw clinical data from departmental to enterprise information systems within a healthcare enterprise, care is required regarding the amount of data that is acquired and saved. Too little data risks omitting possibly significant physiological events from the acquired and saved data. Too much data overloads the system, both in data throughput over the network and in the required amount of data storage. In addition, too much data may impair a physician's ability to effectively analyze so large a volume of data for the relatively rare physiological events embedded within it. Ultimately, the goal is to help the patient; any method that does not aid in clinical diagnosis or treatment may even impede clinical progress, which would be detrimental to care.
In a typical critical care medical facility, a raw clinical data stream, having a substantial amount of data is continuously generated by bedside patient monitors. Telemetry data obtained from the patient monitors is normally not stored in its entirety. Instead, in standard practice, a flow sheet or assessment sheet is maintained in which a clinician, such as an anesthesiologist, nurse, respiratory therapist, etc., enters, at regular time intervals, important patient status information, e.g. including the heart rate, respiratory rate, and many other key parameters. Thus, these parameters are monitored continuously but recorded at discrete time intervals in the clinical record. In the least sophisticated systems, this information is recorded by hand using a paper flow sheet or assessment sheet at the patient's bedside. The assessment sheet remains with the patient throughout the stay at the hospital. Once completed, the paper record becomes a part of the patient's permanent record.
As information technology progresses, much of the paper based record keeping system is being supplanted by an electronic record, in which clinicians record information from patient monitoring systems, either manually or automatically, in electronic form. The data in this electronic form of assessment sheet is maintained in a central location in the hospital. The electronic assessment sheet remains associated with the patient throughout the phases of diagnosis and treatment in the hospital. The types of information which can be recorded, transmitted, and viewed within a typical health information system include administrative and demographic information as well as clinical results and telemetry data as would be entered in a paper assessment sheet.
A benefit of an electronic system is that, unlike the paper record, the electronic medical record may be accessed from many different locations without physically retrieving the patient's hardcopy information from a particular department. Loss of information is unlikely, and the use of electronic medical records establishes a standard approach for recording patient information. Each department conforms to specific standards in terms of the types and quality of information being recorded for each patient. Browser based technology derived from world wide web applications further simplifies medical record inspection, in terms of both viewing convenience and reducing the delays associated with retrieving the paper record. The net result is that clinicians can readily obtain patient information when required and where required. In addition, two way communication between the health information system and clinical systems enables the relatively error free retrieval of other patient information such as their medical record number and insurance information without adding further delay or introducing errors into the patient's record within the departmental system.
Bedside monitors typically can generate detailed information at intervals down to fractions of a second. In typical systems, a portion of the generated data is recorded, i.e. most of this information is discarded. More specifically, this data is recorded at predetermined time intervals (e.g. 15 minute intervals), in a manner similar to paper assessment sheets. An inherent compromise exists in terms of the size of the interval and the capturing of relatively important data from the bedside monitors. If the recording interval is too large, events of relatively short duration but high importance such as heart rate spikes or respiratory rate increases may be missed and not recorded within the electronic medical record. On the other hand, if the recording interval is too small the throughput of data across the hospital computing network and the storage requirements in a mass storage device of the medical record increase. The medical record becomes cumbersome and filled with much useless information, possibly even rendering the system unusable.
For example, during the course of a single hour, the total quantity of individually unique results can easily exceed five thousand values. Over the course of an eight hour shift, this can grow to nearly forty thousand values for a single patient and single physiological parameter. Multiplying these numbers by thirty or more patients and more than one physiological parameter per patient makes clear that the data quantity problem can become unmanageable. Automatic data compression is used to assist in the storage of such data within an associated long term archive. However, with the integration of clinical systems and a health information system via a long term clinical record, it is necessary to strike a balance between a system wide data deficit and a system wide data surplus.
Even in an inefficient health information system, the intention of the administrators and users is not to store a complete record of the patient telemetry data. One of the clinician's roles is to identify unimportant information and to record information which is deemed important for the clinical record of that patient. This goal, however, is impractical in a real world clinical environment. Clinicians are frequently moving from patient to patient, with their primary focus on patient care rather than on full time data identification and collection. Clinical information tools available today do provide for filtering of repeated information, reducing the chance that redundant results are continually sent to the long term clinical record. Unfortunately, data that does not repeat, or that exhibits non-repeating variations, is not accommodated by this filtering approach. Filtering approaches use a simple comparison with a previous data value to determine whether the newer value should be excluded from transmission to the long term record. This simple filtering method treats data that is noisy or rarely repeats as 'important', so that unimportant data points are transmitted to the long term record.
Results stored in the assessment sheet for the purpose of clinical reporting are sometimes inadequate as well, as can be appreciated by reference to
Overlaid on the curve 1 are data points 2, 8, 9, 10 and 11 of the assessment sheet curve 12. These data points reflect those values that normally are recorded using a fifteen minute update time interval of the clinical assessment sheet, as would be typical in a surgical intensive care environment. Curve 12 is a plot through the data points 2, 8, 9, 10 and 11. As seen in
A typical data management approach to the foregoing observations is that by maintaining a running record of the minimum, the maximum, and the mean signal value, the necessary information can be provided. However, this additional signal value information, while providing more insight into the range of the values over the course of the measurement period, does not provide insight into the behavior and trend of the raw signal data over time. The average value 5, together with the signal minimum 3 and maximum signal 4, add an additional three data points to the assessment sheet recording. If the exact time 6 of the minimum rate 3 and the exact time 7 of the maximum rate 4 are added, an additional two data points become part of the record for this patient. However, variations in signal behavior, and short term responses to stimuli such as drug interactions are still missing from the assessment sheet, even with this additional information.
The depiction of raw results shown in the curve 1 is representative of approximately 280 data points. For the collected telemetry data to be of value, a system is needed that can utilize this data, and still provide insight into the character and trend of the original signal, without requiring an automated system to store, and an end user to view, the complete record of the raw information that has been collected for an individual patient.
BRIEF SUMMARY OF THE INVENTION
In accordance with principles of the present invention, a processing system for patient medical parameters includes a communication interface for acquiring patient parameter data comprising a medically significant signal from a patient monitoring device attached to a patient. A transform processor converts the medically significant signal into a plurality of components using a transform. A filter filters the components to exclude components based on criteria to provide filtered components. An inverse transform processor inverse-transforms the filtered components to provide a representation of the medically significant signal.
BRIEF DESCRIPTION OF THE DRAWING
In the drawing:
The present invention utilizes a discrete wavelet transform to provide a system for reconstructing or approximating a data signal or function. The disclosed wavelet transform provides time and magnitude localization of data signal specifics. More specifically, a data signal is partitioned into successive blocks containing a predetermined number of signal samples. Within each block, the data signal is encoded into coefficients representing magnitude details at differing time resolutions. This provides advantages (to be described below) when reconstructing time varying, nonstationary processes typically occurring in biomedical telemetry data. In the illustrated embodiment, the discrete wavelet transform calculation is conducted with respect to a Haar basis function, in which individual averages and differences, or details, are computed with respect to the raw signal data. One skilled in the art understands that other basis functions may also be used to perform the wavelet transformation, understands the advantages and limitations of the different basis functions, and understands how to select an appropriate basis function for a particular application.
For example, a small sample signal vector fT of raw data collected from a patient may take the form set forth in Equation 1:
fT=[5 −2 3 1] [1]
The process of computing wavelet coefficients from the vector fT described by Equation 1 is understood by referring to
The computation illustrated in
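The averages-and-differences computation can be sketched in a few lines of code. The following is a minimal Python illustration (not part of the original disclosure; the function name haar_dwt is assumed) of the unnormalized Haar transform used in these examples: each pass replaces adjacent pairs with their average and half-difference, and the coefficient vector lists the ensemble average first, followed by details from coarsest to finest.

```python
def haar_dwt(f):
    """Unnormalized Haar DWT: each pass replaces adjacent pairs with their
    average and half-difference; the result lists the ensemble average
    first, then detail coefficients from coarsest to finest level."""
    avgs = list(f)
    levels = []                       # detail coefficients, one list per level
    while len(avgs) > 1:
        pairs = [((avgs[i] + avgs[i + 1]) / 2, (avgs[i] - avgs[i + 1]) / 2)
                 for i in range(0, len(avgs), 2)]
        levels.insert(0, [d for _, d in pairs])   # prepend: coarser levels first
        avgs = [a for a, _ in pairs]
    return [avgs[0]] + [d for level in levels for d in level]

# The small sample vector of equation [1]:
print(haar_dwt([5, -2, 3, 1]))   # [1.75, -0.25, 3.5, 1.0]
```

The same function applied to the eight-sample vector of equation [9] reproduces the coefficient ordering used throughout this description.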
The relationship between the wavelet coefficients and the raw signal data may be expressed in matrix form:
f=H4b [5]
where H4 represents a 4×4 Haar matrix having the form:
Given available raw signal data f, the wavelet coefficients b may be found directly as follows:
b=H4−1f [7]
The Haar matrix H4 may also be inverted using standard methods.
The creation of the Haar matrix follows a predictable pattern as the number of rows and columns increases. The size of the matrix grows according to the scale p=2^n, where n is a positive integer. Thus, in the Haar basis function, the quantity of data conforms to this scale as well. The 8×8 Haar basis matrix H8 is:
The number of rows and columns contained within a Haar Hp basis matrix likewise follows the value 2^n. The H8 basis matrix transforms sample vectors f having 8 samples to transform vectors b having 8 coefficients.
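The predictable structure of the Haar matrix can be sketched programmatically. The Python helper below (build_haar and matvec are assumed names, not from the disclosure) constructs the p×p synthesis matrix consistent with the averages-and-half-differences convention of these examples, and verifies the relation f=H4b of equation [5]:

```python
def build_haar(p):
    """Build the p x p Haar synthesis matrix H_p (p = 2^n) so that f = H_p b:
    column 0 carries the ensemble average; each detail column is +1 over the
    first half of its block and -1 over the second half."""
    assert p > 0 and p & (p - 1) == 0, "p must be a power of two"
    H = [[0.0] * p for _ in range(p)]
    for r in range(p):
        H[r][0] = 1.0                 # scaling (average) column
    col = 1
    blocks = 1
    while blocks < p:                 # one pass per resolution level
        size = p // blocks
        for k in range(blocks):
            start = k * size
            for r in range(start, start + size // 2):
                H[r][col] = 1.0
            for r in range(start + size // 2, start + size):
                H[r][col] = -1.0
            col += 1
        blocks *= 2
    return H

def matvec(H, b):
    """Plain matrix-vector product."""
    return [sum(h * c for h, c in zip(row, b)) for row in H]

# Reconstruct the four-sample vector of equation [1] from its coefficients:
print(matvec(build_haar(4), [1.75, -0.25, 3.5, 1.0]))   # [5.0, -2.0, 3.0, 1.0]
```

Because the matrix depends only on p, the same helper generates the H8 basis used with the eight-sample vector of equation [9].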
Referring also to
fT=[5 −2 3 1 7 9 −3 −5] [9]
As seen in
As best seen in
bT=[1.875 −0.125 −0.25 6 3.5 1 −1 1] [10]
As may be seen from equations [5] and [7] above, given a complete set of wavelet coefficients b, a signal f may be reconstructed with no loss. One feature of the wavelet coefficients is that they establish the relative scale of the absolute differences with respect to the overall raw signal average. That is, the difference terms di provide an indication of a deviation of a signal from the ensemble average value si at respective levels of coarseness or fineness. In terms of reproducing the raw signal 24, the values of the wavelet coefficients establish their relative impact on the overall signal: the smaller the coefficient, the lower the impact on the overall signal. Thus, compression of the original signal may be achieved, with some degree of loss, by discarding a subset of these coefficients based on the establishment of a sensitivity threshold (described in more detail below).
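The lossless reconstruction noted above can be expressed as an inverse transform that expands each level of coefficients back into pairs. A minimal Python sketch (haar_idwt is an assumed name) recovering the raw samples of equation [9] from the coefficients of equation [10]:

```python
def haar_idwt(b):
    """Inverse of the unnormalized Haar DWT: starting from the ensemble
    average, expand each coefficient level via (average + detail,
    average - detail) until the full sample length is restored."""
    avgs = [b[0]]
    pos = 1
    while pos < len(b):
        details = b[pos:pos + len(avgs)]
        avgs = [v for a, d in zip(avgs, details) for v in (a + d, a - d)]
        pos += len(details)
    return avgs

# Equation [10] coefficients recover the raw samples of equation [9] exactly:
print(haar_idwt([1.875, -0.125, -0.25, 6, 3.5, 1, -1, 1]))
# [5.0, -2.0, 3.0, 1.0, 7.0, 9.0, -3.0, -5.0]
```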
Discarding potentially important information from the raw signal can be detrimental and provides a clinician with incomplete patient data. However, wavelet transformation provides the capability to automatically record the complete data signal while filtering out relatively unimportant details. This ensures that communication of the important data elements between the patient monitoring and treatment devices in the clinical environment and the storage devices in the health enterprise does not overwhelm the system. Further, any amount of the data, from a complete ensemble to that representing detailed temporal changes, may be inspected as desired by clinicians and researchers without requiring that the complete record of data be retrieved from the data repository in a single request.
The magnitude of the wavelet coefficients provides insight into the level of contribution they make to the character of the overall raw signal 24. Hence, by omitting certain coefficients, or substituting zero coefficients for them, it becomes possible to exclude noise, artifact, or other components that are judged to be of minor influence to the overall raw data sample. As an example of this capability, Table 1 presents an array of representative wavelet coefficients. The first column is the independent variable: time. Subsequent sets of columns define corresponding sets of Haar basis wavelet coefficients and the resulting signal values. The second set of columns represent wavelet coefficients to which no threshold is applied; the third set of columns represent wavelet coefficients to which a 10% threshold is applied; the fourth set of columns represent wavelet coefficients to which a 20% threshold is applied; and the fifth set of columns represent wavelet coefficients to which a 30% threshold is applied.
The absolute threshold values are computed by multiplying the selected threshold percentage by the largest wavelet coefficient present. For example, a 10% threshold is calculated by multiplying the largest coefficient magnitude value of −4 (occurring at time 8.0) by 0.1 (i.e. 10%) to obtain an absolute threshold value of 0.4. That is, the constraint imposed by the 10% threshold is that the absolute value of allowed wavelet coefficients be greater than 0.4. For the case of a 10% threshold, one wavelet coefficient (−0.250 occurring at time 3.0) is discarded. The discarded coefficient is set to zero so that its contribution is ignored for the purposes of signal reconstruction. The resulting wavelet coefficients are listed in the first column in the ‘10% Threshold’ set of columns. A signal, reconstructed from these wavelet coefficients, is listed in the second column in the ‘10% Threshold’ set of columns, and the error between the reconstructed signal and the actual signal (from the second column in the ‘No Threshold’ set of columns) is listed in the third column in the ‘10% Threshold’ set of
At the 20% Threshold level, the absolute threshold value is 0.2 multiplied by −4, yielding an absolute threshold of 0.8. In this case, too, the coefficient occurring at time 3.0 is discarded, i.e. is set to zero. Thus, the reconstructed signal and errors for the 20% Threshold level are the same as for the 10% Threshold level. The differences between the complete ensemble of wavelet coefficients and the threshold filtered wavelet coefficients result in a maximum error or deviation of 0.250 between the reconstructed signal and the original signal.
At the 30% threshold level, the absolute threshold value is 0.3 multiplied by −4, yielding an absolute threshold of 1.2. In this case, three coefficients (values −0.250, 1.000 and −1.000 at times 3.0, 6.0 and 7.0 respectively) are discarded, i.e. set to zero. The filtered wavelet coefficients are listed in the first column in the ‘30% Threshold’ set of columns. A signal reconstructed from the filtered wavelet coefficients is listed in the second column of the ‘30% Threshold’ set of columns, and the difference between the reconstructed signal and the actual signal is listed in the third column in the ‘30% Threshold’ set of columns. In this case, the error or deviation between the original signal and the reconstructed signal is no greater than 1.25.
The general impact of discarding coefficients from the wavelet coefficient vector is to reproduce a signal which is an approximation of the original signal. The higher the threshold, the higher the error between the reconstructed signal and the actual signal. Conversely, as the threshold for discarding coefficients approaches zero, the difference between the reconstructed signal and the original signal approaches zero.
Depending on the shape, repetitiveness, noise content and other behavior of the original signal, the degree of loss that results from discarding wavelet coefficients may or may not be acceptable to the end user. However, for most telemetry applications, there is not a significant difference between the lossy, high threshold, cases and the lossless, no threshold, case.
In the case of a predictable or repetitive signal, the discarding of wavelet coefficients can have a trivial effect on the reconstruction of the original signal. This latter case can be illustrated effectively with the aid of a different type of raw signal data characteristic. The data is presented in Table 2, and plotted in
The signal presented in Table 2 and illustrated in
This characteristic is another benefit of the discrete wavelet transform, namely, that the wavelet transformation process itself automatically produces detail coefficients which are zero in locations where no detail exists in the original signal, for example, during periods when the signal has a constant value. Stated differently, the discrete wavelet transform provides a means for representing the original signal with fewer coefficients.
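This automatic production of zero detail coefficients can be demonstrated on a hypothetical step signal (illustrative data, not taken from the disclosure), using the same unnormalized Haar convention as the earlier examples:

```python
def haar_dwt(f):
    # Unnormalized Haar DWT: averages and half-differences, coarsest first.
    avgs = list(f)
    levels = []
    while len(avgs) > 1:
        pairs = [((avgs[i] + avgs[i + 1]) / 2, (avgs[i] - avgs[i + 1]) / 2)
                 for i in range(0, len(avgs), 2)]
        levels.insert(0, [d for _, d in pairs])
        avgs = [a for a, _ in pairs]
    return [avgs[0]] + [d for level in levels for d in level]

# A hypothetical signal that is constant at 2, then steps down to 1:
b = haar_dwt([2, 2, 2, 2, 1, 1, 1, 1])
print(b)                              # [1.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(sum(1 for c in b if c != 0))    # 2
```

Only two of the eight coefficients are nonzero: the transform itself has already concentrated the information, so the constant regions contribute nothing that must be stored.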
In a clinical context, one measure of the degree of acceptability of a filtered wavelet transformed signal is how accurately the reconstructed signal represents the assessment sheet data. In a typical acute care environment, the assessment data would be the results retained within the long term medical record. Therefore, if the automated signal sampling and data storage reduction features of wavelet transforms can at least convey the assessment sheet values, then the present automated system is providing no less data to the long term record than is already available. However, any additional signal information that is provided can be regarded as beneficial for describing the overall characteristics of the raw signal data. As described above, to provide lossless reproduction, the total number of wavelet coefficients is equal to the total number of data points contained within the original signal. The benefit afforded by the wavelet transform is that it provides a means of determining the relative contribution of each raw signal data point. By excluding certain of these coefficients the original signal may be satisfactorily approximated.
Referring to
bthresh=K×bimax [11]
where bimax is the largest wavelet coefficient within the total set of coefficients under consideration, and K is the percentage level specified in fractional form (for instance: 4% implies K=0.04). When K=0, the complete set of wavelet coefficients is included in the signal reconstruction calculation.
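Equation [11] can be sketched directly in code. In this Python illustration (threshold_filter is an assumed name, and the 5% value of K is hypothetical), coefficients whose absolute values do not exceed bthresh are set to zero, consistent with the threshold examples described for Table 1:

```python
def threshold_filter(b, K):
    """Equation [11]: bthresh = K * |largest coefficient|; any coefficient
    whose absolute value does not exceed bthresh is zeroed (discarded)."""
    bthresh = K * max(abs(c) for c in b)
    return [c if abs(c) > bthresh else 0.0 for c in b]

# Coefficients of equation [10]; a hypothetical K of 5% zeroes the two
# smallest coefficients while preserving the dominant ones.
b = [1.875, -0.125, -0.25, 6, 3.5, 1, -1, 1]
print(threshold_filter(b, 0.05))
# [1.875, 0.0, 0.0, 6, 3.5, 1, -1, 1]
```

With K=0 the filter passes every coefficient unchanged, corresponding to the lossless case.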
The effect of this compression of signal 1 is a relatively small loss in the accuracy of the data that is normally stored in the long term clinical record. However, some insight has been gained into the character of the original signal 1. For example, the maximum signal value 35 appearing in the reconstructed signal 34 is determined to be one hundred six beats per minute, which occurs at approximately fifty two minutes. The minimum signal value 36 appearing in the reconstructed signal 34 is determined to be fifty seven beats per minute, occurring at approximately thirty nine minutes. Contrasting these values with the maximum data point 4 and the minimum data point 3 obtained from the raw data 1 illustrated in
The approximation process can be performed with different values for the threshold K resulting in varying degrees of accuracy in the reconstructed signals. Table 3 summarizes the error, or absolute value of the difference, between the recorded assessment sheet value and the value of the reconstructed signal at corresponding times as a function of the threshold K. As the threshold K increases, the error increases. This characteristic is illustrated in the calculation of the root-sum-squared (RSS) error appearing at the bottom of each column. The RSS error is a measure of the ensemble effect of the errors which occur for each time specified within Table 3 for the threshold value K.
Error Between Recording in Assessment Sheet and the Reconstructed Signal for the Specified Threshold
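The RSS error appearing at the bottom of each column of Table 3 is the square root of the sum of the squared per-time errors. A brief Python sketch with illustrative error values (hypothetical numbers, not taken from Table 3):

```python
import math

# Hypothetical per-time errors between assessment sheet values and a
# reconstructed signal (illustrative only).
errors = [0.375, 0.375, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125]

# Root-sum-squared (RSS) error: the ensemble effect of the individual errors.
rss = math.sqrt(sum(e * e for e in errors))
print(round(rss, 4))   # 0.6124
```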
Another feature of the present invention is the ability to automatically filter repeated values or results. For nonstationary data, the application of the threshold approach using discrete wavelet transforms makes possible a reduction in artifacts appearing in the raw signal, thereby beneficially reducing the overall quantity of data that is transferred and stored in the long term archive. Data that is repeating can be automatically filtered without applying a heuristic approach such as the comparison of new values with previous values.
The data shown in
Oxygen regulation is a manually controlled process; thus, the reduction in oxygen support is typically done in steps or stages, during which time patients are assessed based on their ability to maintain proper blood oxygenation levels SpO2 (typically in excess of ninety five percent). The temporal profile of the FiO2 parameter is typically set in a series of step functions 41, 42 and 43, etc., in which levels are reduced over time. Normally, this parameter is updated in the assessment sheet at the time of each change. The bedside monitor provides an updated value, albeit a constant one, throughout the course of weaning.
The wavelet coefficients 44 depicted in
Therefore, the original signal 40, represented by two hundred twenty six data points, can be recreated without loss using fifteen wavelet coefficients, as illustrated in
This same approach applies for analogous reasons to other bedside monitor data signals such as, for example, the mandatory respiratory rate setting. For patients using mechanical ventilation, mandatory or machine initiated breathing is also adjusted in direct proportion to the patient's ability to sustain spontaneous breaths.
Because wavelet coefficients represent details in the signal being transformed, they are a means for automatically detecting a change in the level of the raw signal data. This characteristic may be utilized in processing noisy signals where a small threshold can be used to filter out the ambient noise, leaving the larger coefficients that are typically associated with significant changes in the raw signal level.
This noise suppression feature is illustrated in
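The noise suppression behavior can be illustrated end-to-end with a hypothetical noisy step signal (deterministic ±0.1 "noise" riding on an illustrative step; the function names are assumed and follow the unnormalized Haar convention of the earlier examples). A 10% threshold suppresses the small detail coefficients associated with the noise, leaving the large coefficients that carry the level change, so the reconstruction recovers a clean step:

```python
def haar_dwt(f):
    # Unnormalized Haar DWT: averages and half-differences, coarsest first.
    avgs = list(f)
    levels = []
    while len(avgs) > 1:
        pairs = [((avgs[i] + avgs[i + 1]) / 2, (avgs[i] - avgs[i + 1]) / 2)
                 for i in range(0, len(avgs), 2)]
        levels.insert(0, [d for _, d in pairs])
        avgs = [a for a, _ in pairs]
    return [avgs[0]] + [d for level in levels for d in level]

def threshold_filter(b, K):
    # Equation [11]: zero any coefficient not exceeding K * |largest coefficient|.
    bthresh = K * max(abs(c) for c in b)
    return [c if abs(c) > bthresh else 0.0 for c in b]

def haar_idwt(b):
    # Inverse transform: expand each coefficient level via (a + d, a - d).
    avgs = [b[0]]
    pos = 1
    while pos < len(b):
        details = b[pos:pos + len(avgs)]
        avgs = [v for a, d in zip(avgs, details) for v in (a + d, a - d)]
        pos += len(details)
    return avgs

# Hypothetical step signal with small deterministic "noise" riding on it.
noisy = [0.1, -0.1, 0.1, -0.1, 5.1, 4.9, 5.1, 4.9]
filtered = threshold_filter(haar_dwt(noisy), 0.1)   # 10% threshold
print(haar_idwt(filtered))   # [0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0]
```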
In
The patient telemetry server 62 also produces an ASCII (American Standard Code for Information Interchange) data stream 63 which is sent directly to a communications interface of a discrete wavelet transform (DWT) processor 64. This ASCII data stream carries patient parameter data represented by a medically significant signal from a patient monitoring device attached to a patient. The DWT processor 64 is responsible for performing a transform of the patient parameter data signal to generate a data stream consisting of a plurality of components, e.g. a DWT coefficient data stream 65, representing the patient parameter data. The component coefficient data is subsequently stored together with the real time assessment sheet data 66 in the real time data store 67.
Typically, the patient telemetry server 62, the DWT processor 64 and the real time data store 67 are implemented in a computer or processor system. As used herein, a processor operates under the control of an executable application to (a) receive information from an input information device, (b) process the information by manipulating, analyzing, modifying, converting and/or transmitting the information, and/or (c) route the information to an output information device. A processor may use, or comprise the capabilities of, a controller or microprocessor, for example. The processor may operate with a display processor or generator. A display processor or generator is a known element for generating signals representing display images or portions thereof. A processor and a display processor may comprise any combination of hardware, firmware, and/or software.
An executable application as used herein comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, including those of an operating system, healthcare information system or other information processing system, for example, in response to a user command or input. An executable procedure is a segment of code or machine readable instructions, a sub-routine, or another distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data or parameters, performing operations on received input data or performing functions in response to received input parameters, and providing resulting output parameters. A user interface comprises one or more display images, generated by the display processor under the control of the processor, enabling user interaction with a processor or other device.
The DWT processor 64 may also perform one or more mathematical operations on the raw signal data 60 in addition to the discrete wavelet transform: e.g. a Fast Fourier Transform (FFT), a Discrete Cosine Transform (DCT), signal averaging, and/or Kalman filtering. The DWT processor 64 may further filter the component (e.g. DWT coefficients) data stream based on criteria, e.g. a threshold, to reduce noise level or to identify a desired signal artifact. As described above, the criteria, e.g. threshold, may be based on a statistical significance calculation. Also as described above, this filtering may result in excluding components (e.g. DWT coefficients) from the data stream. This filtered data stream may be stored in the real time data store 67.
The telemetry reconstruction processor 70 performs the inverse wavelet transform on the retrieved coefficients and permits the user to view the reassembled or approximated data signal via the charting and display tools 71, which may operate as a display processor for presenting an image of the signal for the user. Telemetry data reconstruction using the coefficients 65 is possible because the Haar matrix (or set of Haar matrices) is advantageously universal. Regardless of the particular signal data, the Haar matrix has the same form. Therefore, the Haar matrix or matrices may be permanently stored in the telemetry reconstruction processor 70 or the long term medical repository 69. In practice, one of the master files in the system 68 can be a Wavelet Transform Master (WTM) file, which is the collection of Haar matrices of various sizes used to accommodate wavelet quantities retrieved for signal reconstruction. Thus, the wavelet coefficients 65 are the data components that need to be supplied from the real time data store 67 to the enterprise user.
Operation of interface 72 begins by selecting a patient identification (PID), which may, for example, be a medical record number, for a desired patient from a list in window 73. The patient identifier (PID) list window 73 is populated automatically based upon the patient data in the ASCII packets 63 retrieved from the patient telemetry server 62. Available patient identifiers are placed in the PID list window 73. Once a patient identifier 76 is selected, the associated physiological parameters being monitored on that patient are automatically listed in the parameter (PARM) list window 74. A desired physiological parameter is selected from the list in region 74 of the interface 72. In the example shown, the PID “400490” 76 and the physiological parameter “RESP” 75 have been highlighted or selected.
Activating the Start Query button 77 begins operation of the DWT processor 64 (
When the parameters are selected and/or specified, as described above, the Start DWT button 82 may be activated. In response, DWT processor 64 begins to calculate DWT coefficient data 65 based on the physiological parameter data samples in the ASCII packet signal 63 and to write these coefficients to the real time data storage location 67 (
The foregoing examples and descriptions are presented as illustrations of the present invention. In particular, the present invention lends itself to automation, and can be incorporated in a wide variety of data processing schemes using many different software implementations. A person of ordinary skill in the data processing field appreciates that numerous different approaches and techniques may accomplish the novel characteristics set forth herein without departing from the scope of the present invention. For example, one skilled in the art understands that all elements illustrated in
Claims
1. A processing system for patient medical parameters, comprising:
- a communication interface for acquiring patient parameter data comprising a medically significant signal from a patient monitoring device attached to a patient;
- a transform processor for converting said medically significant signal into a plurality of components using a transform;
- a filter for filtering said components to exclude components based on criteria to provide filtered components; and
- an inverse transform processor for inverse-transforming said filtered components to provide a representation of said signal.
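The transform / filter / inverse-transform pipeline of claim 1 can be sketched end to end. The Haar wavelet and the magnitude threshold are illustrative assumptions (claim 6 lists several candidate transforms; claim 5 describes magnitude thresholding as one filtering criterion); this is not the claimed implementation itself.

```python
# Minimal, hypothetical sketch of the claimed pipeline: transform the
# medically significant signal into components, exclude small components
# by a magnitude threshold, then inverse-transform the filtered
# components to reconstruct a representation of the signal.

S = 2 ** 0.5  # Haar normalization constant

def forward(signal):                       # transform processor
    evens, odds = signal[0::2], signal[1::2]
    approx = [(a + b) / S for a, b in zip(evens, odds)]
    detail = [(a - b) / S for a, b in zip(evens, odds)]
    return approx, detail

def filter_components(detail, threshold):  # filter: drop small components
    return [d if abs(d) >= threshold else 0.0 for d in detail]

def inverse(approx, detail):               # inverse transform processor
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / S, (a - d) / S])
    return out

signal = [10.0, 10.0, 10.0, 18.0, 10.0, 10.0]
approx, detail = forward(signal)

# A low threshold preserves the clinically interesting spike ...
rebuilt = inverse(approx, filter_components(detail, threshold=1.0))
# ... while a high threshold excludes it, smoothing the reconstruction.
smoothed = inverse(approx, filter_components(detail, threshold=6.0))
```

The threshold thus trades reconstruction fidelity against the number of components that must be stored or transmitted.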
2. A system according to claim 1, wherein said filter excludes particular components based on criteria including a statistical significance calculation and a predetermined threshold identifying at least one of, (a) a desired signal artifact and (b) a noise level.
3. A system according to claim 1, including
- a display processor for presenting said signal artifacts in an image representation to a user; and,
- a threshold selection processor enabling a user to selectively exclude an artifact from said medically significant signal.
4. A system according to claim 1, including a generator for creating data representing at least one displayed user interface image supporting user selection of:
- a patient;
- an associated particular patient parameter type; and
- associated predetermined filtering criteria.
5. A system according to claim 1, wherein said filter excludes said components below a predetermined magnitude threshold.
6. A system according to claim 1, wherein said transform comprises at least one of: (a) a wavelet transform, (b) an FFT, (c) a DCT, (d) signal averaging, and (e) Kalman filtering.
7. A system according to claim 1, wherein said components include time and magnitude domain representative coefficients, wherein signal magnitude and temporal location are substantially preserved through the transformation process.
8. A system according to claim 2, further comprising a data store, the data store being adapted to store at least one of, (a) all parameter data generated by a patient monitoring device, (b) selected patient parameter data generated by the patient monitoring device, and (c) components used by the inverse transform processor to provide a representation of the medically significant signal.
9. A system according to claim 8, wherein the selected parameter data generated by the patient monitoring device is used to create an assessment sheet.
10. A system according to claim 9, wherein the representation of the medically significant signal created by the inverse transform processor substantially includes at least the selected parameter data present in the assessment sheet.
11. A system according to claim 10, wherein said filter excludes particular components representing temporally adjacent patient parameter data having substantially identical values.
12. A method for processing raw data streams produced by a biomedical monitoring device, wherein the raw data streams are processed by performing the following:
- inputting the raw data streams to a telemetry server having processing means capable of dividing each raw data stream into at least a first component and a second component;
- forwarding the first component to a storage device capable of storing substantially all data values present in the raw data stream; and
- forwarding the second component to a transform processor capable of assigning a relative significance to each data value present in the raw data stream.
13. A method according to claim 12, further comprising the transform processor generating a plurality of coefficients, each coefficient characterizing a magnitude of a data value present in the raw data stream.
14. A method according to claim 13, further comprising forwarding substantially all of the coefficients to a threshold processor for setting a threshold value that excludes a set of coefficients having a relative contribution to the data values present in the raw data stream that is less than the threshold value.
15. A method according to claim 14, further comprising the threshold processor setting a threshold value that excludes a set of coefficients representing temporally adjacent substantially repeating raw data values.
16. A method according to claim 14, further comprising the threshold processor setting a threshold value that preserves a set of coefficients representing data values present in the raw data stream that indicates a significant characteristic of the data stream.
17. A method of reducing data transmission and storage requirements in a telemetry processing system used for collecting and displaying a nonstationary event, comprising:
- collecting substantially all data values present in a signal that characterizes the nonstationary event;
- assigning a relative contribution value to each data value present in the signal;
- excluding each data value assigned a relative contribution value having a relatively small effect on the signal, thereby creating a set of excluded data values;
- including each data value assigned a relative contribution value having a relatively large effect on the signal, thereby creating a set of included data values; and
- constructing an approximation of the signal using the set of included data values.
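The include/exclude partition of claim 17 can be sketched under one possible reading of "relative contribution value," taken here as each value's share of total signal energy; the measure, function name, and cutoff are all assumptions, since the claim does not fix a specific contribution metric.

```python
# Hypothetical sketch of the claim-17 partition: assign each data value
# a relative contribution (here, its fraction of total signal energy),
# exclude values with a relatively small effect on the signal, and keep
# the rest for constructing an approximation of the signal.

def split_by_contribution(values, cutoff):
    total = sum(v * v for v in values) or 1.0
    included, excluded = [], []
    for v in values:
        share = (v * v) / total  # relative contribution of this value
        (included if share >= cutoff else excluded).append(v)
    return included, excluded

# The two large values dominate the signal energy and are retained;
# the small values are excluded (and, per claim 20, could be discarded
# to reduce storage requirements).
included, excluded = split_by_contribution([0.1, 9.0, 0.2, 8.0, 0.1], 0.01)
```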
18. A method according to claim 17, further comprising excluding each relative contribution value representing a data value having a magnitude that is substantially identical to a magnitude of an adjacent data value.
19. A method according to claim 17, further comprising extracting a characteristic of the nonstationary event by identifying relative contribution values associated with the characteristic.
20. A method according to claim 19, further comprising:
- storing the set of included relative contribution values in a long term data storage repository; and
- discarding the set of excluded relative contribution values, thereby reducing an absolute storage requirement necessary to reconstruct the approximation of the signal.
Type: Application
Filed: Oct 21, 2004
Publication Date: Jun 2, 2005
Inventor: John Zaleski (West Brandywine, PA)
Application Number: 10/970,565