VISUALIZATION OF MEDICAL DEVICE EVENT PROCESSING

Systems, apparatus, instructions, and methods for medical machine time-series event data processing are disclosed. An example apparatus includes a data processor to process one-dimensional data captured over time with respect to one or more patients. The example apparatus includes a visualization processor to transform the processed data into graphical representations and to cluster the graphical representations into at least first and second blocks arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion. The example apparatus includes an interaction processor to facilitate interaction, via a graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent arises from U.S. Provisional Patent Application Ser. No. 62/838,022, which was filed on Apr. 24, 2019. U.S. Provisional Patent Application Ser. No. 62/838,022 is hereby incorporated herein by reference in its entirety. Priority to U.S. Provisional Patent Application Ser. No. 62/838,022 is hereby claimed.

FIELD OF THE DISCLOSURE

This disclosure relates generally to medical data visualization and, more particularly, to visualization of medical device event processing.

BACKGROUND

The statements in this section merely provide background information related to the disclosure and may not constitute prior art.

Healthcare environments, such as hospitals or clinics, include information systems, such as hospital information systems (HIS), radiology information systems (RIS), clinical information systems (CIS), and cardiovascular information systems (CVIS), and storage systems, such as picture archiving and communication systems (PACS), library information systems (LIS), and electronic medical records (EMR). Information stored can include patient medication orders, medical histories, imaging data, test results, diagnosis information, management information, and/or scheduling information, for example. A wealth of information is available, but the information can be siloed in various separate systems requiring separate access, search, and retrieval. Correlations between healthcare data remain elusive due to technological limitations on the associated systems.

Further, when data is brought together for display, the amount of data can be overwhelming and confusing. Such data overload presents difficulties when trying to display the data, and competing priorities put a premium on available screen real estate. Existing solutions are deficient in addressing these and other related concerns.

BRIEF DESCRIPTION

Systems, apparatus, instructions, and methods for medical machine time-series event data processing are disclosed.

Certain examples provide a time series data visualization apparatus including a data processor to process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference. The example apparatus includes a visualization processor to transform the processed data into a plurality of graphical representations visually indicating a change over time in the data and to cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion. The example apparatus includes an interface builder to construct a graphical user interface to display the at least first and second blocks of graphical representations. The example apparatus includes an interaction processor to facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

Certain examples provide a tangible computer-readable storage medium including instructions that, when executed, cause at least one processor to at least: process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference; transform the processed data into a plurality of graphical representations visually indicating a change over time in the data; cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion, the first block, the second block, and the indicator to be displayed via a graphical user interface; and facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

Certain examples provide a computer-implemented method for medical machine time-series event data processing and visualization. The example method includes processing one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference. The example method includes transforming the processed data into a plurality of graphical representations visually indicating a change over time in the data. The example method includes clustering the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion. The example method includes facilitating interaction, via a graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a block diagram of an example system including medical devices and associated monitoring devices for a patient.

FIG. 2 is a block diagram of an example system to process machine and physiological data and apply one or more machine learning models to predict future events from the data.

FIG. 3 is a block diagram of an example system to process machine and physiological data and apply one or more machine learning models to detect events that have occurred.

FIGS. 4A-4D depict example artificial intelligence models.

FIG. 5 illustrates an example visualization of data provided from multiple sources.

FIGS. 6-10E illustrate example interfaces displaying one-dimensional patient data and associated analysis for interaction and processing.

FIG. 11 illustrates an example time series data visualization system.

FIGS. 12-14 illustrate flow diagrams of example methods to process one-dimensional time series data using the example system(s) of FIGS. 1-4 and/or 11.

FIG. 15 is a block diagram of an example processor platform capable of executing instructions to implement the example systems and methods disclosed and described herein.

DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific examples that may be practiced. These examples are described in sufficient detail to enable one skilled in the art to practice the subject matter, and it is to be understood that other examples may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the subject matter of this disclosure. The following detailed description is, therefore, provided to describe an exemplary implementation and not to be taken as limiting on the scope of the subject matter described in this disclosure. Certain features from different aspects of the following description may be combined to form yet new aspects of the subject matter discussed below.

When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “first,” “second,” and the like, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. As the terms “connected to,” “coupled to,” etc. are used herein, one object (e.g., a material, element, structure, member, etc.) can be connected to or coupled to another object regardless of whether the one object is directly connected or coupled to the other object or whether there are one or more intervening objects between the one object and the other object.

As used herein, the terms “system,” “unit,” “module,” “engine,” etc., may include a hardware and/or software system that operates to perform one or more functions. For example, a module, unit, or system may include a computer processor, controller, and/or other logic-based device that performs operations based on instructions stored on a tangible and non-transitory computer readable storage medium, such as a computer memory. Alternatively, a module, unit, engine, or system may include a hard-wired device that performs operations based on hard-wired logic of the device. Various modules, units, engines, and/or systems shown in the attached figures may represent the hardware that operates based on software or hardwired instructions, the software that directs hardware to perform the operations, or a combination thereof.

As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.

The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects, and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.

In addition, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

Medical data can be obtained from imaging devices, sensors, laboratory tests, and/or other data sources. Alone or in combination, medical data can assist in diagnosing a patient, treating a patient, forming a profile for a patient population, influencing a clinical protocol, etc. However, to be useful, medical data must be organized properly for analysis and correlation beyond a human's ability to track and reason. Computers and associated software and data constructs can be implemented to transform disparate medical data into actionable results.

For example, imaging devices (e.g., gamma camera, positron emission tomography (PET) scanner, computed tomography (CT) scanner, X-ray machine, magnetic resonance (MR) imaging machine, ultrasound scanner, etc.) generate two-dimensional (2D) and/or three-dimensional (3D) medical images (e.g., native Digital Imaging and Communications in Medicine (DICOM) images) representative of parts of the body (e.g., organs, tissues, etc.) to diagnose and/or treat diseases. Other devices such as electrocardiogram (ECG) systems, electroencephalography (EEG) systems, pulse oximetry (SpO2) sensors, blood pressure measuring cuffs, etc., provide one-dimensional waveform and/or time series data regarding a patient.

Acquisition, processing, analysis, and storage of time-series data (e.g., one-dimensional waveform data, etc.) obtained from one or more medical machines and/or devices play an important role in diagnosis and treatment of patients in a healthcare environment. Devices involved in the workflow can be configured, monitored, and updated throughout operation of the medical workflow. Machine learning can be used to help configure, monitor, and update the medical workflow and devices.

Machine learning techniques, whether deep learning networks or other experiential/observational learning systems, can be used to characterize and otherwise interpret, extrapolate, conclude, and/or complete acquired medical data from a patient, for example. Deep learning is a subset of machine learning that uses a set of algorithms to model high-level abstractions in data using a deep graph with multiple processing layers including linear and non-linear transformations. While many machine learning systems are seeded with initial features and/or network weights to be modified through learning and updating of the machine learning network, a deep learning network trains itself to identify “good” features for analysis. Using a multilayered architecture, machines employing deep learning techniques can process raw data better than machines using conventional machine learning techniques. Examining data for groups of highly correlated values or distinctive themes is facilitated using different layers of evaluation or abstraction.

To be accurate and robust, machine learning networks must be trained and tested using data that is representative of data that will be processed by the deployed network model. Data that is irrelevant, inaccurate, and/or incomplete can result in a deep learning network model that provides an incorrect output in response to data input. Certain examples provide top-down systems and associated methods to capture and organize data (e.g., group, arrange with respect to an event, etc.), remove outliers, and/or otherwise align data with respect to a clinical event, trigger, other occurrence, etc., to form a ground truth for training, testing, etc., of a learning network model.

Certain examples provide automated processing and visualization of data for a group of patients and enable removal of outliers and drilling down into the data to determine patterns, trends, causation, individual patient data, etc. Relevant data can be annotated quickly to form ground truth data for training of one or more artificial intelligence models. For example, a plurality of one-dimensional signal waveforms can be stacked and/or otherwise organized for a patient, and patients can be stacked and/or otherwise organized with respect to each other and with respect to one or more events, criterion, etc. By organizing patients and their associated signals with respect to each other based on one or more events, criterion, etc., different outliers emerge from the group depending on the event, criterion, etc., used to organize the patients. As such, outliers eliminated from the data set can vary depending upon the event, criterion, etc.

Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The term “deep learning” is a machine learning technique that utilizes multiple data processing layers to recognize various structures in data sets and classify the data sets with high accuracy. A deep learning network (DLN), also referred to as a deep neural network (DNN), can be a training network (e.g., a training network model or device) that learns patterns based on a plurality of inputs and outputs. A deep learning network/deep neural network can be a deployed network (e.g., a deployed network model or device) that is generated from the training network and provides an output in response to an input.

The term “supervised learning” is a deep learning training method in which the machine is provided already classified data from human sources. The term “unsupervised learning” is a deep learning training method in which the machine is not given already classified data, which makes the machine useful for abnormality detection. The term “semi-supervised learning” is a deep learning training method in which the machine is provided a small amount of classified data from human sources compared to a larger amount of unclassified data available to the machine.

The terms “convolutional neural networks” or “CNNs” refer to biologically inspired networks of interconnected data used in deep learning for detection, segmentation, and recognition of pertinent objects and regions in datasets. CNNs evaluate raw data in the form of multiple arrays, breaking the data into a series of stages and examining the data for learned features.

The term “transfer learning” is a process of a machine storing the information used in properly or improperly solving one problem to solve another problem of the same or similar nature as the first. Transfer learning may also be known as “inductive learning”. Transfer learning can make use of data from previous tasks, for example.

The term “active learning” is a process of machine learning in which the machine selects a set of examples for which to receive training data, rather than passively receiving examples chosen by an external entity. For example, as a machine learns, the machine can be allowed to select examples that the machine determines will be most helpful for learning, rather than relying only on an external human expert or external system to identify and provide examples.

The terms “computer aided detection” or “computer aided diagnosis” refer to computers that analyze medical data to suggest a possible diagnosis.

Deep learning is a class of machine learning techniques employing representation learning methods that allow a machine to be given raw data and determine the representations needed for data classification. Deep learning ascertains structure in data sets using backpropagation algorithms, which are used to alter internal parameters (e.g., node weights) of the deep learning machine. Deep learning machines can utilize a variety of multilayer architectures and algorithms. While machine learning, for example, involves an identification of features to be used in training the network, deep learning processes raw data to identify features of interest without the external identification.

Deep learning in a neural network environment includes numerous interconnected nodes referred to as neurons. Input neurons, activated from an outside source, activate other neurons based on connections to those other neurons which are governed by the machine parameters. A neural network behaves in a certain manner based on its own parameters. Learning refines the machine parameters, and, by extension, the connections between neurons in the network, such that the neural network behaves in a desired manner.

Deep learning that utilizes a convolutional neural network segments data using convolutional filters to locate and identify learned, observable features in the data. Each filter or layer of the CNN architecture transforms the input data to increase the selectivity and invariance of the data. This abstraction of the data allows the machine to focus on the features in the data it is attempting to classify and ignore irrelevant background information.

Deep learning operates on the understanding that many datasets include high level features which include low level features. While examining an image, for example, rather than looking for an object, it is more efficient to look for edges which form motifs which form parts, which form the object being sought. These hierarchies of features can be found in many different forms of data such as speech and text, etc.

Learned observable features include objects and quantifiable regularities learned by the machine during supervised learning. A machine provided with a large set of well classified data is better equipped to distinguish and extract the features pertinent to successful classification of new data.

A deep learning machine that utilizes transfer learning may properly connect data features to certain classifications affirmed by a human expert. Conversely, the same machine can, when informed of an incorrect classification by a human expert, update the parameters for classification. Settings and/or other configuration information, for example, can be guided by learned use of settings and/or other configuration information, and, as a system is used more (e.g., repeatedly and/or by multiple users), a number of variations and/or other possibilities for settings and/or other configuration information can be reduced for a given situation.

An example deep learning neural network can be trained on a set of expert-classified data, for example. This set of data builds the first parameters for the neural network, and this would be the stage of supervised learning. During the stage of supervised learning, the neural network can be tested to determine whether the desired behavior has been achieved.

Once a desired neural network behavior has been achieved (e.g., a machine has been trained to operate according to a specified threshold, etc.), the machine can be deployed for use (e.g., testing the machine with “real” data, etc.). During operation, neural network classifications can be confirmed or denied (e.g., by an expert user, expert system, reference database, etc.) to continue to improve neural network behavior. The example neural network is then in a state of transfer learning, as parameters for classification that determine neural network behavior are updated based on ongoing interactions. In certain examples, the neural network can provide direct feedback to another process. In certain examples, the neural network outputs data that is buffered (e.g., via the cloud, etc.) and validated before it is provided to another process.

Deep learning machines can utilize transfer learning when interacting with physicians to counteract the small dataset available in the supervised training. These deep learning machines can improve their computer aided diagnosis over time through training and transfer learning. However, a larger dataset results in a more accurate, more robust deployed deep neural network model that can be applied to transform disparate medical data into actionable results (e.g., system configuration/settings, computer-aided diagnosis results, image enhancement, etc.).

In certain examples, visualization of data can be driven by an artificial intelligence framework, and the artificial intelligence framework can provide data for visualization, evaluation, and action. Certain examples provide a framework including a) a computer executing one or more deep learning (DL) models and hybrid deep reinforcement learning (RL) models trained on aggregated machine time-series data converted into a single standardized data structure format and placed in an ordered arrangement per patient to predict one or more future machine events and summarize pertinent past machine events related to the predicted one or more future machine events based on consistent input time series data of a patient having the standardized data structure format; and b) a healthcare provider-facing interface of an electronic device, for use by a healthcare provider treating the patient, configured to display the predicted one or more future machine events and the pertinent past machine events of the patient.

In certain examples, machine signals, patient physiological signals, and a combination of machine and patient physiological signals provide improved prediction, detection, and/or classification of events during a medical procedure. The three data contexts are represented in Table 1 below, associated with example artificial intelligence models that can provide a prediction, detection, and/or classification using the respective data source. Data-driven predictions of events related to a medical treatment/procedure help to lower healthcare costs and improve the quality of care. Certain examples involve DL models, hybrid RL models, and DL+Hybrid RL combination models for prediction of such events. Similarly, data-driven detection and classification of events related to a patient and/or machine helps to lower healthcare costs and improve the quality of care. Certain examples involve DL models, hybrid RL models, and DL+Hybrid RL combination models for detection and classification of such events.

As shown below, machine data, patient monitoring data, and a combination of machine and monitoring data can be used with one or more artificial intelligence constructs to form one or more predictions, detections, and/or classifications, for example.

Data Source | Prediction/Detection/Classification Models
Machine Data | DL; Hybrid RL; DL + Hybrid RL
Monitoring (Patient) Data | DL; Hybrid RL; DL + Hybrid RL
Machine + Monitoring Data | DL; Hybrid RL; DL + Hybrid RL

Table 1. Data source and associated prediction, detection, and/or classification model examples.

Certain examples deploy learned models in a live system for patient monitoring. Training data is to match collected data, so if live data is being collected during surgery, for example, the model is to be trained on live surgical data also. Training parameters can be mapped to deployed parameters for live, dynamic delivery to a patient scenario (e.g., in the operating room, emergency room, etc.). Also, one-dimensional (1D) time series event data (e.g., ECG, EEG, O2, etc.) is processed differently by a model than a 2D or 3D image. 1D time series event data can be aggregated and processed, for example.

Thus, as shown below, one or more medical devices can be applied to extract time-series data with respect to a patient, and one or more monitoring devices can capture and process such data. Benefits of one-dimensional, time-series data modeling include identification of more data-driven events to avoid false alarms (e.g., avoiding false alarm fatigue, etc.), provision of quality event detection, etc. Other benefits include improved patient outcomes. Cost savings can also be realized, such as by better predicting events such as when to reduce gas, when to take a patient off an oxygen ventilator, when to transfer a patient from the operating room (OR) to other care, etc.

Other identification methods are threshold-based rather than personalized. Certain examples provide personalized modeling based on a patient's own vitals, machine data from a healthcare procedure, etc. For example, for patient heart rate, a smaller person has a different rate than a heavier-built person. As such, alarms can differ based on the person rather than conforming to set global thresholds. A model, such as a DL model, etc., can determine or predict when to react to an alarm versus turn the alarm off, etc. Certain examples can drive behavior, configuration, etc., of another machine (e.g., based on physiological conditions, a machine can send a notification to another machine to lower anesthesia, reduce ventilation, etc.; detect ventilator dyssynchrony and react to it, etc.).

As shown in an example system 100 of FIG. 1, one or more medical devices 110 (e.g., ventilator, anesthesia machine, intravenous (IV) infusion drip, etc.) administer to a patient 120 while one or more monitoring devices 130 (e.g., electrocardiogram (ECG) sensor, blood pressure sensor, respiratory monitor, etc.) gather data regarding patient vitals, patient activity, medical device operation, etc. Such data can be used to train an AI model, can be processed by a trained AI model, etc.

Certain examples provide systems and methods for deep learning and hybrid reinforcement learning-based event prediction, detection, and/or classification. For example, as shown in an example system 200 of FIG. 2, machine data 210 and physiological (e.g., vitals, etc.) data 220 from one or more medical devices 230, mobile digital health monitors 240, one or more diagnostic cardiology (DCAR) devices 250, etc., is provided in a data stream 260 (e.g., continuous streaming, live streaming, periodic streaming, etc.) to a preprocessor 270 to pre-process the data and apply one or more machine learning models to detect events in the data stream 260, for example. The pre-processed data is provided from the preprocessor 270 to an event predictor 280, which applies one or more AI models, such as a DL model, a hybrid RL model, a DL+hybrid RL model, etc., to predict future events from the preprocessed data. The event predictor 280 forms an output 290 including one or more insights, alerts, actions, etc., for a system, machine, user, etc. For example, the event predictor 280 can predict, based on model(s) applied to the streaming 1D data, occurrence of event(s) such as heart attack, stroke, high blood pressure, accelerated heart rate, etc., and an actionable alert can be provided by the output 290 to adjust an IV drip, activate a sensor and/or other monitor, change a medication dosage, obtain an image, send data to another machine to adjust its settings/configuration, etc.

In certain examples, detection and event classification can also be facilitated using deep learning and hybrid reinforcement learning. FIG. 3 illustrates an example system 300 in which the machine data 210 and the physiological (e.g., vitals, etc.) data 220 from the one or more medical devices 230, mobile digital health monitors 240, one or more diagnostic cardiology (DCAR) devices 250, etc., is provided offline 310 (e.g., once a study and/or other exam has been completed, periodically at a certain time/interval or based on a current size of data collection, etc.) to the preprocessor 270 to pre-process the data and apply one or more machine learning models to detect events in the data set 310, for example. The pre-processed data is provided from the preprocessor 270 to an event detector 320, which applies one or more AI models, such as a DL model, a hybrid RL model, a DL+hybrid RL model, etc., to detect and classify events from the preprocessed data. The event detector 320 forms an annotation output 330 including labeled events, etc. For example, the event detector 320 can detect and classify, based on model(s) applied to the 1D data, occurrence of event(s) such as heart attack, stroke, high blood pressure, accelerated heart rate, etc., and the event(s) can then be labeled to be used as ground truth 330 for training of an AI model, verification by a healthcare professional, adjustment of machine settings/configuration, etc.

In certain examples, a convolutional neural network (CNN) and a recurrent neural network (RNN) can be used alone or in combination to process data and extract event predictions. Other machine learning/deep learning/other artificial intelligence networks can be used alone or in combination.

Convolutional neural networks are deep artificial neural networks that are used to classify images (e.g., associate a name or label with what object(s) are identified in the image, etc.), cluster images by similarity (e.g., photo search, etc.), and/or perform object recognition within scenes, for example. CNNs can be used to instantiate algorithms that can identify faces, individuals, street signs, tumors, platypuses, and/or many other aspects of visual data, for example. FIG. 4A illustrates an example CNN 400 including layers 402, 404, 406, and 408. The layers 402 and 404 are connected with neural connections 403. The layers 404 and 406 are connected with neural connections 405. The layers 406 and 408 are connected with neural connections 407. Data flows forward via inputs 401 from the input layer 402 to the output layer 408 and to an output 409.

The layer 402 is an input layer that, in the example of FIG. 4A, includes a plurality of nodes. The layers 404 and 406 are hidden layers and include, in the example of FIG. 4A, a plurality of nodes. The neural network 400 may include more or fewer hidden layers 404, 406 than shown. The layer 408 is an output layer and includes, in the example of FIG. 4A, a node with an output 409. Each input 401 corresponds to a node of the input layer 402, and each node of the input layer 402 has a connection 403 to each node of the hidden layer 404. Each node of the hidden layer 404 has a connection 405 to each node of the hidden layer 406. Each node of the hidden layer 406 has a connection 407 to the output layer 408. The output layer 408 has an output 409 to provide an output from the example neural network 400.

Of the connections 403, 405, and 407, certain example connections may be given added weight while other example connections may be given less weight in the neural network 400. Input nodes are activated through receipt of input data via the inputs 401, for example. Nodes of the hidden layers 404 and 406 are activated through the forward flow of data through the network 400 via the connections 403 and 405, respectively. The node of the output layer 408 is activated after data processed in the hidden layers 404 and 406 is sent via the connections 407. When the output node of the output layer 408 is activated, the node outputs an appropriate value based on processing accomplished in the hidden layers 404 and 406 of the neural network 400.
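For illustration only, the layered arrangement of FIG. 4A can be sketched in a few lines of Python. The layer sizes, weights, and activation function below are illustrative assumptions rather than part of the disclosure; the sketch simply shows data flowing forward from the inputs 401 through the weighted connections 403, 405, and 407 to the output 409.

    # Minimal sketch of the layered network of FIG. 4A (illustrative sizes).
    import numpy as np

    rng = np.random.default_rng(0)

    def connections(n_in, n_out):
        # Weighted connections between successive layers.
        return rng.normal(scale=0.1, size=(n_in, n_out))

    w403 = connections(8, 16)   # input layer 402 -> hidden layer 404
    w405 = connections(16, 16)  # hidden layer 404 -> hidden layer 406
    w407 = connections(16, 1)   # hidden layer 406 -> output layer 408

    def forward(x):
        h404 = np.tanh(x @ w403)     # activate hidden layer 404
        h406 = np.tanh(h404 @ w405)  # activate hidden layer 406
        return h406 @ w407           # output 409

    print(forward(rng.normal(size=(1, 8))))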

Recurrent networks are a powerful set of artificial neural network algorithms especially useful for processing sequential data such as sound, time series (e.g., sensor) data or written natural language, etc. A recurrent neural network can be implemented similar to a CNN but including one or more connections 412 back to a prior layer, such as shown in the example RNN 410 of FIG. 4B.

A reinforcement learning (RL) model is an artificial intelligence model in which an agent takes an action in an environment to maximize a cumulative reward. FIG. 4C depicts an example RL network 420 in which an agent 422 operates with respect to an environment 424. An action 421 of the agent 422 results in a change in a state 423 of the environment 424. Reinforcement 425 is provided to the agent 422 from the environment 424 to provide a reward and/or other feedback to the agent 422. The state 423 and reinforcement 425 are incorporated into the agent 422 and influence its next action, for example.
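For illustration only, the agent/environment loop of FIG. 4C can be sketched as follows. The toy two-action environment, reward structure, and learning rate are illustrative assumptions, not part of the disclosure; the sketch shows an action 421 changing the state 423 and reinforcement 425 influencing the agent's next action.

    # Minimal, self-contained sketch of the RL loop of FIG. 4C.
    import random

    class ToyEnvironment:
        def __init__(self):
            self.state = 0
        def step(self, action):
            # The action 421 changes the state 423; reinforcement 425
            # rewards action 1 in this toy setup (an assumption).
            self.state += action
            reward = 1.0 if action == 1 else 0.0
            return self.state, reward

    class ToyAgent:
        def __init__(self):
            self.values = {0: 0.0, 1: 0.0}  # estimated value per action
        def act(self):
            if random.random() < 0.1:       # occasional exploration
                return random.choice([0, 1])
            return max(self.values, key=self.values.get)
        def learn(self, action, reward):
            # Nudge the value estimate toward the observed reinforcement.
            self.values[action] += 0.1 * (reward - self.values[action])

    env, agent = ToyEnvironment(), ToyAgent()
    for _ in range(100):
        action = agent.act()
        _, reward = env.step(action)
        agent.learn(action, reward)
    print(agent.values)  # the rewarded action accumulates the higher value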

Hybrid reinforcement models include a deep hybrid RL model, for example. Reinforcement learning refers to goal-oriented algorithms, which learn how to attain a complex objective (goal) and/or maximize along a particular dimension over many steps/actions. For example, an objective can include maximizing points won in a game over many moves. Reinforcement learning models can start from a blank slate and, under the right conditions, achieve superior performance. Like a child incentivized by spankings and candy, these algorithms are penalized when they make the wrong decisions and rewarded when they make the right decisions to provide reinforcement. A hybrid deep reinforcement network can be configured as shown in the example 430 of FIG. 4D.

As shown in the example 430 of FIG. 4D, a policy 432 drives model-free deep reinforcement learning algorithm(s) 434 to learn tasks associated with processing of data, such as 1D waveform data, etc. Results of the model-free RL algorithm(s) 434 provide feedback to the policy 432 and generate samples 438 for model-based reinforcement learning algorithm(s) 436. The model-based RL algorithm(s) 436 operate according to the policy 432 and provide feedback to the policy 432 based on the samples from the model-free RL algorithm(s) 434. Model-based RL algorithm(s) 436 are more sample-efficient and more flexible than task-specific policy(-ies) 432 learned with model-free RL algorithm(s) 434, for example. However, asymptotic performance of model-based RL algorithm(s) 436 is usually worse than that of model-free RL algorithm(s) 434 due to model bias, for example. For example, model-free RL algorithm(s) 434 are not limited by model accuracy and can therefore achieve better final performance, although at the expense of higher sample complexity. The hybrid deep RL models combine model-based 436 and model-free 434 RL algorithms (e.g., model-based algorithm(s) 436 enable supervised initialization of the policy 432, which can be fine-tuned with the model-free algorithm(s) 434, etc.) to accelerate model-free learning and improve sample efficiency, for example.

Certain examples apply hybrid RL models to facilitate determination and control of input and provide an ability to separate and/or combine information including ECG, SpO2, blood pressure, and/or other parameters. Early warning signs of a condition or health issue can be determined and used to alert a patient, clinician, other system, etc. A normal/baseline value can be determined, and deviation from the baseline (e.g., during the course of a surgical operation, etc.) can be determined. Signs of distress can be identified/predicted before an issue becomes critical. In certain examples, a look-up table can be provided to select one or more artificial intelligence networks based on particular available input and desired output. The lookup table can enable rule-based neural network selection to generate appropriate model(s), for example, as in the sketch below.
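For illustration only, such rule-based selection might be sketched as a simple dictionary lookup. The keys, model names, and default below are illustrative assumptions paraphrasing Table 1, not a definitive implementation.

    # Minimal sketch of rule-based model selection from a lookup table.
    MODEL_LOOKUP = {
        ("machine_data", "prediction"): "DL",
        ("machine_data", "detection"): "hybrid_RL",
        ("monitoring_data", "prediction"): "DL",
        ("monitoring_data", "classification"): "DL+hybrid_RL",
        ("machine+monitoring", "prediction"): "DL+hybrid_RL",
    }

    def select_model(data_source, target_output):
        # Fall back to a DL model when no rule matches (an assumption).
        return MODEL_LOOKUP.get((data_source, target_output), "DL")

    print(select_model("machine+monitoring", "prediction"))  # DL+hybrid_RL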

Other neural networks include transformer networks, graph neural networks, etc. A transformer or transformer network is a neural network architecture that transforms an input sequence to an output sequence using sequence transduction or neural machine translation (e.g., to process speech recognition, text-to-speech transformation, etc.), for example. The transformer network has memory to remember or otherwise maintain dependencies and connections (e.g., between sounds and words, etc.). For example, the transformer network can include a CNN with one or more attention models to improve speed of translation/transformation. The transformer can be implemented using a series of encoders and decoders (e.g., implemented using a neural network such as a feed forward neural network, CNN, etc., and one or more attention models, etc.). As such, the transformer network transforms one sequence into another sequence using the encoder(s) and decoder(s).

In certain examples, a transformer is applied to sequence and time series data. Compared with an RNN and/or long short-term memory (LSTM) model, the transformer has the following advantages. The transformer applies a self-attention mechanism that directly models relationships between all words in a sentence, regardless of their respective positions. The transformer allows for significantly more parallelization. The transformer encodes each position and applies the attention mechanism to relate two distant words of both the inputs and outputs, which can then be parallelized to accelerate training, for example. Thus, the transformer requires less computation to train and is a much better fit for modern machine learning hardware, speeding up training by up to an order of magnitude, for example.
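For illustration only, the self-attention mechanism can be sketched as follows; the sketch omits the learned query/key/value projections and multiple heads of a full transformer, and the sequence dimensions are illustrative assumptions.

    # Minimal sketch of scaled dot-product self-attention over a sequence.
    import numpy as np

    def self_attention(x):
        # x: (sequence_length, d) array of position representations.
        d = x.shape[-1]
        scores = x @ x.T / np.sqrt(d)  # relate every position to every other
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
        return weights @ x  # each output mixes the whole sequence

    x = np.random.default_rng(0).normal(size=(5, 8))  # 5 time steps, 8 features
    print(self_attention(x).shape)  # (5, 8); all rows computed in parallel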

A graph neural network (GNN) is a neural network that operates on a graph structure. In a graph, vertices or nodes are connected by edges, which can be directed or undirected edges, for example. The GNN can be used to classify nodes in the graph structure, for example. For example, each node in the graph can be associated with a label, and node labels can be predicted by the GNN without ground truth. Given a partially labeled graph, for example, labels for unlabeled nodes can be predicted.
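For illustration only, one message-passing step of a GNN can be sketched as a neighborhood average over an adjacency matrix; the graph, features, and averaging rule below are illustrative assumptions.

    # Minimal sketch of one message-passing step on a small undirected graph.
    import numpy as np

    adjacency = np.array([[0, 1, 0],
                          [1, 0, 1],
                          [0, 1, 0]], dtype=float)  # edges between vertices
    features = np.array([[1.0], [0.0], [0.5]])      # one feature per node

    def propagate(adj, feats):
        adj_hat = adj + np.eye(len(adj))            # include each node itself
        degree = adj_hat.sum(axis=1, keepdims=True)
        return adj_hat @ feats / degree             # neighborhood average

    print(propagate(adjacency, features))  # smoothed features for node labeling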

Certain examples include aggregation techniques for detection, classification, and prediction of medical events based on DL processing of time series data. Different signals can be obtained, and different patterns can be identified for different circumstances. From a large aggregated data set, a subset can be identified and processed as relevant for a particular “-ology” or circumstance. Data can be partitioned into a relevant subset. For example, four different hospitals are collecting data, and the data is then partitioned to focus on cardiac data, etc. Partitioning can involve clustering, etc. Metadata can be leveraged, and data can be cleaned to reduce noise, artifacts, outliers, etc. Missing data can be interpolated and/or otherwise generated using generative adversarial networks (GANs), filters, etc. Detection occurs after the fact, while a prediction is determined before an event occurs. In certain examples, prediction occurs in real time (or substantially real time given system processing, storage, and data transmission latency) using available data.

Post-processing of predicted, detected, and/or classified events can include a dashboard visualization for detection, classification, and/or prediction. For example, post-processing can generate a visualization summarizing events. Post-processing can also generate notifications determined by detection, classification, and/or prediction, for example.

In certain examples, an algorithm can be used to select one or more machine learning algorithms to instantiate a network model based on aggregated pre-processed data and a target output. For example, a hybrid RL can be selected for decision making regarding which events to choose from a set of targeted events. A transformer network can be selected for parallel processing and accelerating event generation, for example. A graph neural network can be selected for interpreting targeted events and relations exploration, for example. The neural network and/or other AI model generated by the selected algorithm can operate on the pre-processed data to generate summarized events, etc.

In certain examples, data can be pre-processed according to one or more sequential stages to aggregate the data. Stages can include data ingestion and filtration, imputation, aggregation, modeling, and recommendation. For example, data ingestion and filtration can include one or more devices connected to a patient and used to actively capture and filter data related to the patient and/or device operation. For example, a patient undergoing surgery is equipped with an anesthetic device and one or more monitoring devices capturing one or more of the patient's vitals at a periodic interval. The anesthetic device can be viewed as a source of machine events (acted upon the patient), and the captured vitals can be treated as a source of patient data, for example.

FIG. 5 illustrates an example visualization 500 of data provided from multiple sources including an anesthetic device, a monitoring device, etc. Such a stream of data can have artifacts due to one or more issues occurring during and/or after acquisition of the data. For example, heart rate and/or ST segment errors can occur due to electrocautery interference, patient movement, etc. Oxygen saturation measurement errors can occur due to dislocation of a sensor, vasopressor use, etc. Non-invasive blood pressure errors can be caused by leaning on the pressure cuff, misplacement of the cuff, etc. Such artifacts are filtered from the stream using one or more statistics (e.g., median, beyond six sigma range, etc.) that can be obtained from the current patient and/or from prior records of patients who have undergone a similar procedure, and the filtering may involve one or more normalization techniques with respect to age, gender, weight, body type, etc.
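For illustration only, a median-anchored six-sigma filter of the kind described above might be sketched as follows. The robust (median absolute deviation) spread estimate and the sample heart-rate values are illustrative assumptions, not a mandated implementation.

    # Minimal sketch of statistical artifact filtering on a vitals stream.
    import numpy as np

    def filter_artifacts(signal, n_sigma=6.0):
        median = np.median(signal)
        mad = np.median(np.abs(signal - median))  # robust spread estimate
        sigma = 1.4826 * mad                      # ~std. deviation for normal data
        keep = np.abs(signal - median) <= n_sigma * sigma
        return signal[keep]

    heart_rate = np.array([72.0, 74, 71, 73, 75, 300, 72])  # 300 = artifact
    print(filter_artifacts(heart_rate))  # the electrocautery spike is dropped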

In certain examples, the data may have some observations missing and/or removed during a filtration process, etc. This missing information can be imputed before the data is used for training a neural network model, etc. The data can be imputed using one or an ensemble of imputation methods to better represent the missing values. For example, imputation can be performed using a closest fill (e.g., using a back or forward fill with the value closest with respect to time, etc.), collaborative filtering by determining another input that could be a possible candidate, using a generative method trained with data from a large sample of patients, etc.
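For illustration only, the closest-fill imputation described above might be sketched with pandas fills; the SpO2 series is an illustrative example, and the nearest-value interpolation requires SciPy.

    # Minimal sketch of imputing missing observations in a vitals series.
    import pandas as pd

    spo2 = pd.Series([98.0, None, None, 96.0, None, 97.0])

    print(spo2.ffill())  # forward fill: carry the last observed value ahead
    print(spo2.bfill())  # back fill: pull the next observed value backward
    print(spo2.interpolate(method="nearest"))  # fill with the closest value in time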

In certain examples, a captured stream of data may involve aggregation before being consumed in downstream process(es). Patient data can be aggregated based on demographic (e.g., age, sex, income level, marital status, occupation, race, etc.), occurrence of a specific medical condition, etc. One or more aggregation methods can be applied to the data, such as K-means/medoids, Gaussian mixture models, density-based aggregation, etc. Aggregated data can be analyzed and used to classify/categorize a patient to determine a relevant data set for training and/or testing of an associated neural network model, for example.

For example, using K-means/medoids, data can be clustered according to certain similarities. Medoids are representative objects of a data set or a cluster within a data set whose average dissimilarity to all the objects in the cluster is minimal. A cluster refers to a collection of data points aggregated together because of certain similarities. A target number k can be defined, which refers to a number of centroids desired in the dataset. A centroid is an imaginary or real location representing a center of a cluster. Every data point is allocated to one of the clusters by reducing an in-cluster sum of squares, for example. As such, a K-means algorithm identifies k centroids and then allocates every data point to the nearest cluster, while keeping the centroids as small as possible. The “means” in K-means refers to an averaging of the data; that is, finding the centroid. In a similar approach, a “median” can be used instead of the mean as the middle point. A “goodness” of a given value of k can be assessed with methods such as a silhouette method, elbow analysis, etc.
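For illustration only, K-means clustering with a silhouette check for a “good” k might be sketched as follows; the toy two-cohort vitals matrix and the candidate values of k are illustrative assumptions.

    # Minimal sketch of K-means clustering with a silhouette "goodness" check.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    vitals = np.vstack([rng.normal(70, 2, (20, 2)),   # one patient cohort
                        rng.normal(90, 2, (20, 2))])  # a second cohort

    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vitals)
        print(k, silhouette_score(vitals, labels))  # higher = better separated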

In certain examples, a Gaussian mixture model (GMM) is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. A Gaussian mixture model can be viewed as generalized k-means clustering that incorporates information about the covariance structure of the data as well as the centers of the latent Gaussians associated with the data. The generalization can be thought of in terms of the shapes the clusters can form, which in the case of GMMs are arbitrary shapes determined by the Gaussian parameters of the distribution, for example.

Density-based spatial clustering of applications with noise (DBSCAN) is a data clustering algorithm that can be used in data mining and machine learning. Based on a set of points (e.g., in a bi-dimensional space), DBSCAN groups together points that are close to each other based on a distance measurement (e.g., Euclidean distance, etc.) and a minimum number of points. DBSCAN also marks as outliers points that are in low-density regions. Using DBSCAN involves two control parameters: epsilon (a distance) and the minimum number of points to form a cluster, for example. DBSCAN can be used for situations in which there are highly irregular cluster shapes that are not processable using a mean/centroid-based method, for example.
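For illustration only, the GMM and DBSCAN options can be contrasted in a short sketch; the point clouds, epsilon, and minimum-points values are illustrative assumptions.

    # Minimal sketch contrasting GMM clustering with density-based DBSCAN.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    points = np.vstack([rng.normal(0, 0.3, (30, 2)),
                        rng.normal(3, 0.3, (30, 2)),
                        [[10.0, 10.0]]])  # an isolated, low-density outlier

    gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(points)
    db_labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(points)  # two control parameters
    print(set(db_labels))  # label -1 marks the low-density outlier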

In certain examples, a recommender system or a recommendation system is a subclass of information filtering system that seeks to predict the “rating” or “preference” a user would give to an item. The recommender system operates on an input to apply collaborative filtering and/or content-based filtering to generate a predictive or recommended output. For example, collaborative filtering builds a model based on past behavior as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in. Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item to recommend additional items with similar properties. In the healthcare context, such collaborative and/or content-based filtering can be used to predict and/or categorize an event and/or classify a patient based on the event(s), etc.

Thus, certain examples provide a plurality of methods that can be used to determine a cohort to which the patient belongs. Based on the cohort, relevant samples can be extracted to train and inference a model for a given patient. For example, when looking at a particular patient and trying to inference for the particular patient, an appropriate cohort can be determined to enable retrieval of an associated subset of records previously obtained and/or from a live stream of data. In certain examples, the top N records are used for training and inferencing.

In certain examples, patients and associated patient data can be post-processed. For example, given that a clinician attends to more than one patient at a given point of time, patients and associated data can be summarized, prioritized, and grouped for easy and quick inferencing of events/outcomes.

For example, patients can be prioritized based on a clinical outcome determined according to one or more pre-determined rules. Patients can also be prioritized based on variance of vitals from a nominal value of the cohort to which the patient belongs, where the cohort is determined by one or more aggregation methods, for example.
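For illustration only, prioritization by variance from a cohort's nominal value might be sketched as a sort on absolute deviation; the patient identifiers, readings, and use of the median as the nominal value are illustrative assumptions.

    # Minimal sketch of prioritizing patients by deviation from a cohort nominal.
    import numpy as np

    readings = {"patient_a": 72.0, "patient_b": 95.0, "patient_c": 70.0}
    nominal = np.median(list(readings.values()))  # cohort nominal value

    priority = sorted(readings,
                      key=lambda p: abs(readings[p] - nominal),
                      reverse=True)
    print(priority)  # patients farthest from the cohort nominal come first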

Additionally, aggregation can be used to provide a high-level summarization of one or more patients being treated. Summarization can also involve aggregation of one or more events occurring in parallel for ease of interpretability. This process of summarization can also be modeled as a learned behavior based on the learning of how a clinician prefers to look at the summarization, for example.

As such, trained, deployed AI models can be applied to 1D patient data to convert the patient time series data into a visual indication of a comparative value of the data. For example, processing the 1D time series patient data using an AI model, such as one or more of the models disclosed above, quantifies, qualifies, and/or otherwise compares the data to a normal value or values, a threshold, a trend, and/or other criterion(-ia) to generate a color-coded, patterned, and/or shaded representation of the underlying time series (e.g., waveform, etc.) data. Data can be clustered for a particular patient, and patients can be clustered for a particular group, such as a hospital, department, ward, clinician, office, enterprise, condition, etc.

Using the prioritization, patient(s) and event(s) can be determined from the group of available patients and events for which a clinician and/or healthcare system/device is to be notified for immediate attention, for example. In certain examples, a visualization can be generated from the prioritized data to enable understandable, actionable, display and interaction with the data.

Review of large data sets across multiple patients is time-consuming and tedious. There can be large volumes of data, often with little context, and the data is to be used to train AI models for detection, classification, prediction, etc. A bottom-up approach involves hours of manual review, including review of garbage data (e.g., generated when a machine was left on for hours, etc.) that is difficult to sort out from useful data. Rather than such a bottom-up approach, certain examples provide a top-down approach through a “Christmas tree” display to visualize multiple criteria/events for multiple patients and easily, visually identify gross outliers when viewing the entire landscape via the visualization interface. The user interface view can be sorted and/or otherwise arranged according to a different condition, location, demographic, other criterion, etc., to arrange patient segments accordingly. Each patient is represented by a block (also referred to as a cluster or set), and each line (also referred to as a bar, strip, stripe, or segment) in the block represents a different 1D data point. A color/pattern/representation of that line conveys an indication of its value/relative value/urgency/categorization/etc. to allow a user to visually appreciate an impact/importance of that data element.

As such, certain examples provide an interactive graphical view to visualize patterns, abnormalities, etc., in a large data set across multiple patients, which transforms raw results into visually appreciable indicators. Using a graphical view helps to improve and further enable comparisons between patients, deviation from a reference or standard, identification of patterns, other comparative analysis, etc. In certain examples, a block of patient information can be magnified to drill down into particular waveforms, other particular data, etc., represented by the colored/patterned line(s) in the top level interface. Patterns of the visualization and/or underlying 1D data can be provided for display and interaction via the user interface, as input to another system for diagnosis, treatment, system configuration, stored, etc.

Thus, certain examples gather 1D time series (e.g., waveform) data from one or more medical devices (e.g., ECG, EEG, ventilator, etc.) and from a patient via one or more monitoring devices. Physiological data and other 1D time series signals can be indicative of a physiological condition associated with a body part from which the data is obtained (e.g., because the signal corresponds to electrical activity of the body part, etc.). As such, the time series physiological signal data, machine data, etc., can be processed and used by clinicians for decision making regarding a patient, medical equipment, etc. As shown in the example of FIG. 6, a variety of waveforms (e.g., ECG, heart rate (HR), respiratory gas movement, central venous pressure, arterial pressure, oxygen fraction, waveform capnography, etc.) can be captured with respect to a patient.

A data view, such as example data view 600, can be generated and provided for a particular patient from the gathered, processed data set, for example. In certain examples, the patient data can be normalized to provide a graphical representation of relative and/or other comparative values. For example, a normalized value can be converted from an alphanumeric value into a graphical representation of that value (e.g., a color, a pattern, a texture, etc.), and a group or set of values for a patient can be represented as a group or cluster of graphical representations (e.g., a set of colored lines, a combination of patterns and/or textures, etc.) in a block for that particular patient. Additionally, a graphical user interface can display and provide access to graphical representations for a set or group of patients shown together for visual comparison, interaction, individual processing, comparative processing, sorting, grouping, separation, etc. The graphical user interface (GUI) view of multiple patients can be organized/arranged according to one or more criteria (e.g., duration, location, condition, etc.).
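For illustration only, converting a normalized value into a graphical representation might be sketched as a color ramp; the reference range and the green-to-red scheme are illustrative assumptions, not the disclosed mapping.

    # Minimal sketch of mapping a vital-sign value to a colored line segment.
    def to_color(value, low, high):
        t = min(max((value - low) / (high - low), 0.0), 1.0)  # normalize to [0, 1]
        return (int(255 * t), int(255 * (1.0 - t)), 0)        # green -> red (RGB)

    heart_rate = [62, 75, 110, 140]
    band = [to_color(v, low=60, high=140) for v in heart_rate]  # one line in a block
    print(band)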

In certain examples, such a GUI can arrange blocks or clusters of patient data such that each patient's block is distinct from other adjacent patient blocks. In certain examples, patient blocks or “cases” can be arranged around (e.g., anchored by, displayed with respect to, etc.) a normalization point or common event/threshold, such as an emergency start event, etc. For example, an occurrence of an emergency event, such as a stroke, heart attack, low blood pressure, low blood sugar, etc., can be indicated in each of a plurality of patients and used to normalize the patient data blocks with respect to that emergency event.

FIG. 7 illustrates an example graphical user interface 700 including an interactive block representation 710 of patient time series data. As shown in the example of FIG. 7, each band 720-728 in the block 710 corresponds to a particular parameter measured over time using the 1D time series data and transformed into a visual representation of the underlying data. A length of the block representation 710 can be used to identify an outlier, pattern, etc., in comparison to other patient blocks, for example. As shown in the example of FIG. 7, one or more signals such as electrical signals, gas flow rate and/or volume, liquid flow rate and/or volume, vibration, other mechanical parameter, etc., can be converted into a visual, unit-less representational band 720-728. In the representation 710 of FIG. 7, the data set can be unknown. Whereas other automations require the data set to be known, with known inputs and expected outputs, certain examples process unknown data to transform the data into a set of visual representations 720-728 forming a block 710 characterizing the patient.
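
A minimal sketch of rendering such unit-less bands as a block (the synthetic signals and the matplotlib rendering are assumptions; the disclosure does not mandate a particular plotting library or color map):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Five synthetic 1D signals with differing units and scales (illustrative).
signals = [rng.normal(loc, scale, 600) for loc, scale in
           [(72, 5), (120, 10), (16, 2), (98, 1), (37, 0.3)]]

# Min-max normalize each signal so every band shares a unit-less [0, 1] scale.
block = np.stack([(s - s.min()) / (s.max() - s.min() + 1e-9) for s in signals])

fig, ax = plt.subplots(figsize=(8, 2))
ax.imshow(block, aspect="auto", cmap="viridis")   # one row per band
ax.set_yticks(range(len(signals)))
ax.set_yticklabels(["HR", "BP", "Resp", "SpO2", "Temp"])
ax.set_xlabel("time (samples)")
plt.show()
```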

FIG. 8 depicts an example interface 800 including representations 810, 820 of a plurality of patients in a healthcare environment (e.g., a hospital, a ward, a department, a clinic, a doctor's office, etc.). As shown in the example of FIG. 8, each cluster or block 810, 820 corresponds to a patient, and each strip/stripe/bar/band/line/segment 812-818, 822-828 in the respective block 810, 820 represents one variable depicted in a normalized color, pattern, and/or other texture format for the corresponding signal. As such, the set of blocks 810, 820 form a “Christmas tree” 830 of colors/patterns/textures providing a visual indication of patient condition, trend, pattern, etc. In certain examples, each strip 812-818, 822-828 serves as a pointer to underlying 1D data and/or associated records, actions, etc., and each block 810, 820 provides a snapshot of patient condition.

As shown in the example of FIG. 8, a position of each block 810, 820 can be anchored with respect to an identified start or reference event 840 (e.g., indicated by a line 840 in the example interface 800 of FIG. 8) to expose variation between patients with respect to that event 840. In certain examples, patient blocks 810, 820 can be ordered in the tree 830 according to one or more criteria/characteristics (e.g., location, duration, condition, demographic, etc.).

For example, a subset of patient data (e.g., less than ten minutes of data, etc.) can be removed for each patient case. In certain examples, the top rows 812-814, 822-824 (e.g., 14 rows, etc.) for each block 810, 820 are categorical and the bottom rows 816-818, 826-828 (e.g., 29 rows, etc.) in each block 810, 820 are numeric. The blocks 810, 820 are anchored by the emergence start event 840 and sorted by length of case, for example.
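
The anchor-and-sort arrangement can be sketched as follows (the field names, offsets, and padding scheme are hypothetical):

```python
# Each case records the sample index of the anchor event and its total
# length; the field names and numbers are hypothetical.
cases = [
    {"patient": "A", "event_idx": 120, "length": 900},
    {"patient": "B", "event_idx": 300, "length": 1500},
    {"patient": "C", "event_idx": 80,  "length": 600},
]

# Anchor: left-pad every case so the reference event lands in a common column.
anchor = max(c["event_idx"] for c in cases)
for c in cases:
    c["pad_left"] = anchor - c["event_idx"]

# Sort blocks by case length, as in the example tree view.
for c in sorted(cases, key=lambda case: case["length"]):
    print(c["patient"], "pad_left:", c["pad_left"], "length:", c["length"])
```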

In certain examples, one or more patients can be excluded from a “ground truth” set of patient data to be used to train one or more AI models. For example, one or more blocks 810, 820 that do not align with other blocks 810, 820 with respect to the event 840 can be excluded from the ground truth data set provided for AI model training and/or testing. Remaining blocks 810, 820 can be annotated for training, testing, patient evaluation, etc. For example, a clinician, a nurse, etc., can annotate the “clean” data to form a training and/or testing data set.
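
As a hedged sketch of this exclusion step (the median-deviation criterion and tolerance are assumptions; any alignment test could be used):

```python
import statistics

# Offset of the reference event within each patient block (illustrative data).
event_offsets = {"A": 118, "B": 122, "C": 540, "D": 125}

median = statistics.median(event_offsets.values())
tolerance = 60   # assumed maximum deviation from the cohort median

ground_truth = {p: off for p, off in event_offsets.items()
                if abs(off - median) <= tolerance}
excluded = set(event_offsets) - set(ground_truth)
print("kept for annotation:", sorted(ground_truth))   # ['A', 'B', 'D']
print("excluded outliers:", sorted(excluded))         # ['C']
```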

In certain examples, the blocks 810, 820 can represent 1D data associated with different patients. In other examples, the blocks 810, 820 can represent 1D data associated with the same patient acquired at different times. The event 840 is used, for example, to organize patients according to group, location, duration, clinical purpose, etc.

In certain examples, the individual “tree” interface 830 can be arranged with a plurality of other tree interfaces to form a “forest” interface. FIG. 9 illustrates an example “forest” or combined interface 900 including a plurality of individual tree interfaces 830, 910, 920. Via the example composite interface 900, a collection of individual interfaces 830, 910, 920 can be compiled to represent a plurality of departments, groups, points in time, instances, etc., of patients in care of a healthcare provider. The forest 900 of trees 830, 910, 920 can be arranged for comparison and interaction according to one or more criteria. For example, the composite interface 900 can highlight variability and can pivot on different characteristics, sort on different sizes, etc. Interaction (e.g., zoom, drill-down, process, etc.) with displayed information can be enabled via the interface 900 and/or its component trees 830, 910, 920 of blocks, for example.

As such, certain examples provide micro and macro views of multiple patients with respect to multiple variables. For example, given a single variable (e.g., oxygen level below a threshold percentage, etc.), a quick view of applicable patients can be shown along with a time stamp of when a measured value of the variable dropped below (or rose above) a threshold level for the variable. A quick analysis can be conducted with respect to other variables at that time to determine a correlation between the change in one variable with respect to the threshold and change(s) to other variable(s) in the block(s) of patient data.
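
For example, locating the time stamp of a threshold crossing and snapshotting another variable at that moment might look like the following sketch (synthetic signals; the variable names and threshold are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(600)                                  # sample index as time stamp
spo2 = 97 - 0.02 * t + rng.normal(0, 0.3, 600)      # synthetic oxygen saturation
hr = 75 + 0.03 * t + rng.normal(0, 1.0, 600)        # synthetic heart rate

threshold = 90.0
below = np.nonzero(spo2 < threshold)[0]
if below.size:
    first = below[0]                                # first threshold crossing
    print(f"SpO2 dropped below {threshold} at t={first}")
    # Quick cross-variable snapshot at the crossing time.
    print(f"heart rate at that moment: {hr[first]:.1f} bpm")
```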

In certain examples, an interface begins with the composite view 900 (e.g., a static image, a dynamic interactive graphic, etc.) across multiple groups/facilities/locations/enterprises, and the system can focus on a particular tree 830, 910, 920 of patient/location data. Within a selected tree 830, a portion of the tree can be displayed in its interface 800, such as by magnifying and displaying real captured data signals in the magnified region, block 810, 820, etc. Another level of magnification can provide access to underlying signal data, etc., for example. Blocks 810, 820 in the tree 830 can be ordered based on duration, procedure, condition, location, etc., and patients are then organized differently within the graphical user interface 800. Similarly, segments 812-818, 822-828 in a block 810, 820 can be ordered based on one or more criteria such as duration, procedure, condition, location, demographic, etc., and patient segments 812-818, 822-828 in the block 810, 820 are then organized for display and interaction according to the selected criterion or criteria. Using the “Christmas tree” interface 800, for example, a view of related patients can be provided to enable proper data clean up decisions as a group before diving into the details of particular patients, issues, procedures, etc. The event indicator 840 can be used as a reference point to align the blocks 810, 820 of data for each patient in the data set to show an event that occurred at that point in time, when in time the particular event occurred for each patient, what was occurring with other patients when a particular patient experienced the event, and/or other comparative visualization and/or analysis, for example.

As shown in the example of FIG. 9, groups of patients can be represented with respect to a particular event 840 (e.g., a particular group, location, duration, clinical purpose, condition, other clinical event, etc.) in one or more trees 830, 910, 920. Stacked signals form a representation of a patient, and patient representations can be organized with respect to each other based on the event and/or other criterion 840, for example. The event/criterion 840 allows the same set of patient data to be “stacked” or organized in different ways, for example. For example, the trees 830, 910, 920 can be formed from different patient data sets, and/or the trees 830, 910, 920 can be formed from the same patient data set. The event/criterion indicator 840, 915, 925 can represent a same event/criterion across different sets of patient data and/or can represent a changing event/criterion across the same set of patient data. As such, each event 840, 915, 925 triggers a different organization of the same patients in the corresponding tree 830, 910, 920, for example. Each different event 840, 915, 925 results in a different tree 830, 910, 920 with different patient outliers. Thus, when training an AI model to recognize a particular event 840, 915, 925, a different set of ground truth patient data can be identified and stored with outliers removed, for example.
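
The effect of re-stacking the same cohort per event can be sketched as a re-sort over per-patient event offsets (the patients, event names, and offsets below are illustrative only):

```python
# Per-patient offsets (sample index) of three candidate events; the patients,
# event names, and offsets are illustrative only.
patients = {
    "A": {"anesthesia_off": 800, "low_bp": 300, "apnea": 150},
    "B": {"anesthesia_off": 450, "low_bp": 700, "apnea": 120},
    "C": {"anesthesia_off": 620, "low_bp": 200, "apnea": 900},
}

# Each event criterion yields a differently ordered "tree" of the same cohort.
for event in ("anesthesia_off", "low_bp", "apnea"):
    order = sorted(patients, key=lambda p: patients[p][event])
    print(event, "->", order)
```

Each sort produces a different ordering of the same three patients, illustrating how each event yields a different tree with different outliers.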

Thus, different groups of patient data can be formed, processed, transformed into visualizations, and analyzed to determine patient patterns, trends, issues, and appropriate data for AI model training, for example. Based on issue, condition, and/or purpose, for example, patients can be arranged in different groups to treat each group of patients separately. For example, patients in cardiology can form one group, and patients with broken bones can form another group. By processing and transforming, without prior knowledge, the data for a particular group and event/criterion 840, 915, 925 into a visualization and grouping, common features can be understood for a particular group, and outliers can be investigated, eliminated, etc.

For example, a group of patients can be analyzed with respect to an anesthesia event 840, 915, 925. The event 840, 915, 925 can be an anesthesia “on” event or an anesthesia “off” event, for example. With an anesthesia “off” event, a goal is to determine an end to a procedure, so patients can be taken off their anesthesia and moved from a surgical suite to a post-operative recovery area. From the tree 830, 910, 920 view, patients undergoing the same procedure can be compared based on the anesthesia off trigger event 840, 915, 925, for example. Alternatively or in addition, the same patient undergoing a procedure multiple times or undergoing different procedures with a same trigger event 840, 915, 925 can be visually compared. When patients are organized with respect to the same event 840, 915, 925 such as removal of anesthesia, their procedure duration, responsiveness, and/or other characteristic can be evaluated. Based on the evaluation, such patient data can be used to form ground truth or known, verified patient data to be relied upon for training and/or testing an AI model. Patterns or trends in the data can also be analyzed for cause and effect and associated adjustment to patient diagnosis, treatment, etc. Patients not following a pattern (e.g., outliers or anomalies, etc.) can be discarded or ignored for the training/test data set, for example.

In certain examples, using the “forest” interface 900 of FIG. 9, a group and/or subgroup of patients can be selected to trigger extraction of a data set to output for training, testing, etc., of one or more AI models. Selection of a subset of a tree 830, 910, 920 via the interface 900 can trigger extraction and transmission (e.g., to be stored, to be used by a model generator/processor, etc.) of the data set associated with the subset, for example.
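
A minimal sketch of the extraction step (the block layout, selection set, and array shapes are assumptions):

```python
import numpy as np

# Underlying 1D series per patient block: 4 strips x 500 samples each
# (synthetic stand-ins for the monitored signals).
rng = np.random.default_rng(2)
raw = {p: rng.normal(size=(4, 500)) for p in "ABCDE"}

selected = {"B", "D", "E"}   # blocks picked via the forest interface (assumed)

# Extract and stack the selected blocks into one array ready for training.
data_set = np.stack([raw[p] for p in sorted(selected)])
print(data_set.shape)        # (3, 4, 500): patients x strips x samples
```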

In certain examples, the data trees can be used to identify and evaluate individual patient information as well as determine group characteristics as with the example interfaces 800, 900. As such, a user can formulate a reliable data set for training and/or testing of an AI model and also leverage the data as actionable data for patient diagnosis, treatment, etc.

FIGS. 10A-10E illustrate a sequence of user interface screens corresponding to an example workflow for anomaly detection in patient data. As shown in the example of FIG. 10A, a multi-patient view interface 1000 provides representations 1010-1020 for a plurality of patients dynamically showing associated vitals and/or other physiological data (e.g., heart rate, blood pressure, oxygen saturation, etc.) including one or more warnings 1030, 1032, where applicable, for the respective patient. For example, the multi-patient view 1000 shows a real-time (or substantially real time given memory and/or processor latency, data transmission time, etc.) digest of physiological signals recorded over a period of time (e.g., the last five minutes, last ten minutes, last minute, etc.) for multiple patients. The patients shown in the multi-patient view 1000 can be associated with the patient representations shown in a tree 830, 910, 920, for example.

Using the example interface 1000, a patient representation 1010-1020 can be selected to trigger an expanded single-patient view 1040, such as shown in the example of FIG. 10B, showing an expanded view of the representation 1020 for the selected patient. For example, a doctor can click one of the displayed patient representations 1010-1020 to see more real-time signals from that patient in the single patient view 1040 of the example of FIG. 10B. The signals can convey phases of a patient's care such as induction, maintenance, and emergence phases of the patient's anesthesia, for example.

Whereas the multi-patient view 1000 may have a prioritized patient 1020, the single-patient view 1040 can include a prioritized event 1042. The example single-patient view 1040 can also include a button, icon, or other trigger 1045 to view a patient history for the patient displayed in the single view interface 1040. By clicking on the history data button 1045 in the single-patient view 1040, collected physiological signals for the patient over a given interval (e.g., in the past hour, the past 5 hours, the past 8 hours, etc.) are displayed. An example patient history view 1050, such as shown in the example of FIG. 10C, provides a holistic, qualitative graphical visualization of the collected patient waveform data over the designated time period (e.g., set by user response, set by preference, set by default, set by data availability, etc.). Thus, rather than looking at numbers or looking at particular waveforms, one or more AI constructs (e.g., hybrid RL, DL, DL+Hybrid RL, etc.) can process the 1D time series waveform data to formulate a block 1055 of visual values 1060-1068 for display. This view helps identify and highlight anomaly conditions detected by the AI clinical detection models. In the example of FIG. 10C, the patient was detected to have both sleep apnea 1070 and seizure 1072, as demonstrated and highlighted by the anomaly or change 1070, 1072 in the value of the respective signal 1060-1068.
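
The disclosure does not detail the internals of these AI constructs; as a stand-in only, a rolling z-score can flag the kind of anomaly highlighted in a strip (the window size, threshold, and injected burst are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
signal = rng.normal(0, 1, 1000)
signal[600:620] += 8.0           # inject a synthetic seizure-like burst

window, thresh = 100, 4.0        # assumed rolling window and z-score threshold
flags = np.zeros(signal.size, dtype=bool)
for i in range(window, signal.size):
    ref = signal[i - window:i]
    z = (signal[i] - ref.mean()) / (ref.std() + 1e-9)
    flags[i] = abs(z) > thresh

print("anomalous samples:", np.nonzero(flags)[0][:5], "...")
```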

The example interface of FIG. 10C transforms data into visual representations over a certain period of time, such as morning, afternoon, overnight, etc. Signal acquisition and transformation can be repeated at a different time of day, different day, same day of the week but a week later, etc., to provide a plurality of visual representations for comparison. The representations can be compared for the same patient, different patients undergoing the same procedure, etc. The representations can be stacked to form a tree 830, 910, 920, for example.

Selecting the indication of seizure 1072 triggers display of an example interface 1080, shown in FIG. 10D, to provide further detail regarding the event/anomaly 1072 in the patient data stripe 1068. In the example of FIG. 10D, the anomaly 1072 is a seizure with respect to a patient, and the detail interface view 1080 displays the waveform data associated with the anomaly 1072 represented in the processed patient data stripe 1068.

FIG. 10E provides an example graphical user interface 1090 providing a probability of seizure at a certain power over a period of time. As such, a user can trigger processing of the waveform from the interface 1080 of FIG. 10D to generate a results interface 1090 providing an analysis of the processed waveform data. In certain examples, the results can be interactive to drive detection, prediction, evaluation of causation, confidence score, etc.

Thus, the example of FIGS. 10A-10E illustrates a new, interactive, dynamic user interface to allow correlation, processing, and viewing of a plurality of sets of patient data; focus on one set of patient data; concentration on a subset of such patients; in-depth review of a particular patient; and a deep dive into source 1D data and associated analysis. In certain examples, the series of interfaces 1000, 1040 can replace the prior interface upon opening, pop up and/or otherwise overlay the prior interface upon opening, etc. The interface allows a patient and/or group of patients to be analyzed, diagnosed, treated, etc., and also facilitates transformation of gathered patient data into a verified data set for training, testing, etc., of AI model(s), for example.

FIG. 11 illustrates an example time series data visualization system or apparatus 1100. The example system 1100 can be used to process 1D time series data from one or more patients to generate interactive visualization interfaces, such as the example interfaces of FIGS. 6-10E. The example system 1100 includes a communication interface 1110, an input processor 1120, a data processor 1130, a model builder 1140, a model deployer 1150, a visualization processor 1160, a user interface builder 1170, and an interaction processor 1180. The example system 1100 transforms data gathered from one or more medical devices, patient monitors, etc., into interactive graphical representations that provide a visual indication of content, status, severity, relevance, etc. The example system 1100 enables a new form of display and interaction with the interactive graphical representations and underlying time series data via a graphical user interface to manipulate the graphical representations individually, in blocks or clusters, with respect to multiple patients, with respect to a reference event, etc.

The example communication interface 1110 is to send and receive data to/from one or more sources such as sensors, other monitoring devices, medical devices, other machines, information systems, imaging systems, archives, etc. The example input processor 1120 is to clean (e.g., remove outlier data, interpolate missing data, adjust data format, etc.), normalize (e.g., with respect to a normal value, reference value, standard value, threshold, etc.), and/or otherwise process incoming data (e.g., monitored patient physiological data, logged machine data, electronic medical record data, etc.) for further processing by the system 1100.
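
A sketch of the input processor's cleaning steps (linear interpolation over gaps and normalization against a reference value; the reference value and gap-filling method are assumptions):

```python
import numpy as np

def clean_and_normalize(series, reference):
    """Interpolate missing samples (NaNs) and express each sample as a
    relative deviation from a reference value (both steps illustrative)."""
    series = np.asarray(series, dtype=float)
    idx = np.arange(series.size)
    missing = np.isnan(series)
    # Linear interpolation over gaps in the time series.
    series[missing] = np.interp(idx[missing], idx[~missing], series[~missing])
    return (series - reference) / reference

raw = [71.0, np.nan, 74.0, 140.0, np.nan, 73.0]
print(clean_and_normalize(raw, reference=72.0))
```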

The example data processor 1130 processes the normalized and/or otherwise preprocessed data from the input processor 1120 to complete the normalization of data begun by the input processor, compare data provided by the input processor 1120 and/or directly from the communication interface 1110, prepare data for modeling (e.g., for training and/or testing a machine learning model, for visualization, for computer-aided diagnosis and/or detection, etc.), etc. In certain examples, the data processor 1130 can process data to convert the data into a graphical representation of relative or normalized values over time for a parameter or characteristic associated with the data (e.g., associated with a stream of 1D time series data, etc.). In other examples, the visualization processor 1160 converts the data into one or more graphical representations for visual review, comparison, interaction, etc.

The example model builder 1140 builds a machine learning model (e.g., trains and tests a supervised machine learning neural network and/or other learning model, etc.) using data from the communication interface 1110, input processor 1120, and/or data processor 1130. For example, the model builder 1140 can leverage normalized data, data transformed into the relative graphical visualization, etc., to train a machine learning model to correlate output(s) with input(s) and test the accuracy of the model. The example model deployer 1150 can deploy an executable network model once the model builder 1140 is satisfied with the training and testing. The deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, etc.

In certain examples, the visualization processor 1160 converts one-dimensional time-series data into one or more graphical representations for visual review, comparison, interaction, etc. In other examples, the visualization processor 1160 organizes and correlates graphical representations with respect to a patient, a reference/emergency/triggering event, etc. The example visualization processor 1160 can be used to process the graphical representations of one or more data series (e.g., 1D time series data, other waveform data, other data, etc.) into one or more visual constructs such as blocks/clusters 810, 820, strips/bands/lines/segments 812-818, etc. The example visualization processor 1160 can correlate blocks, strips, etc., based on patient, location/organization/cohort, emergency event, other reference event or marker, etc.

The example user interface builder 1170 can construct an interactive graphical user interface from the graphical representations, model, and/or other data available in the system 1100. For example, the interface builder 1170 can generate one or more interfaces such as in the examples of FIGS. 6-10E and can generate a linked combination of interfaces such as shown in the example of FIGS. 10A-10E. The example interaction processor 1180 triggers user interface displays, data manipulation, graphical representation manipulation, processing of data, access to external system(s)/process(es), data transfer, storage, reporting, etc., via the one or more interfaces 700-1080 such as shown in the examples of FIGS. 6-10E.

FIG. 12 is a flow diagram of an example method 1200 to process 1D time series data. At block 1202, raw time series data is processed. For example, 1D waveform data from one or more sensors attached to and/or otherwise monitoring a patient, a medical device, other equipment, a healthcare environment, etc., can be processed by the example input processor 1120 to identify the data (e.g., type of data, format of data, source of data, etc.) and route the data appropriately.

At block 1204, a processing method to be applied to the data is determined. The processing method can be dynamically determined by the data processor 1130 based on the type of the data, source of the data, reason for exam, patient status, type of patient, associated healthcare professional, associated healthcare environment, etc. The processing method can be a bottom-up processing method or a top-down processing method, for example. When the processing method is to be a bottom-up processing method, at block 1206, the data is cleaned. For example, the data can be cleaned by the data processor 1130 to normalize the data with respect to other data and/or a reference/standard value. The data can be cleaned by the data processor 1130 to interpolate missing data in the time series, for example. The data can be cleaned by the data processor 1130 to adjust a format of the data, for example. At block 1208, outliers in the data are identified and filtered. For example, outlier data points that fall beyond a boundary, threshold, standard deviation, etc., are filtered (e.g., removed, separated, reduced, etc.) from the data being processed.
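
For example, a standard-deviation boundary for the outlier filtering at block 1208 might be sketched as follows (the three-sigma rule is an assumption; the disclosure also allows other boundaries, thresholds, etc.):

```python
import numpy as np

def filter_outliers(series, n_std=3.0):
    """Remove samples more than n_std standard deviations from the mean
    (one simple boundary; thresholds or percentiles could be used instead)."""
    series = np.asarray(series, dtype=float)
    mu, sigma = series.mean(), series.std()
    return series[np.abs(series - mu) <= n_std * sigma]

data = np.concatenate([np.random.default_rng(4).normal(0, 1, 500),
                       [25.0, -30.0]])                # two gross outliers
print(len(data), "->", len(filter_outliers(data)))    # outliers removed
```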

At block 1210, a model is built using the data. For example, the example model builder 1140 builds a machine learning model (e.g., trains and tests a supervised machine learning neural network and/or other learning model such as an unsupervised learning model, a deep learning model, a reinforcement learning model, a hybrid reinforcement learning model, etc.) using data from the communication interface 1110, input processor 1120, and/or data processor 1130. For example, the model builder 1140 can leverage normalized data, data transformed into the relative graphical visualization, etc., to train a machine learning model to correlate output(s) with input(s) and test the accuracy of the model.
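
The disclosure leaves the model family open; as a minimal stand-in for the build-train-test step, a scikit-learn classifier over synthetic per-block features could look like the following (the features, labels, and model choice are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))               # e.g., per-block summary features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # synthetic event label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)        # "train"
print("test accuracy:", model.score(X_te, y_te))    # "test" before deployment
```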

At block 1212, the model is deployed. For example, the example model deployer 1150 can deploy an executable network model once the model builder 1140 is satisfied with the training and testing. The deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, etc.

At block 1214, feedback is captured from use of the deployed model. For example, feedback can be captured from the deployed model itself, feedback can be captured from an application using the model, feedback can be captured from a human user, etc.

When the processing method is to be a top-down processing method, at block 1216, the data is visualized. For example, the example visualization processor 1160 can be used to process the data to transform the source waveform and/or other 1D time series data into graphical representations. The visualization processor 1160 can normalize and/or otherwise clean the data and transform the 1D data into one or more visual constructs such as blocks/clusters 810, 820, strips/lines/bands/segments 812-818, etc. The example visualization processor 1160 can correlate blocks, strips, etc., based on patient, location/organization/cohort, emergency event, other reference event or marker, etc. As such, multiple blocks for a single patient and/or blocks for multiple patients can be visualized and organized for data filtering, selection, etc. At block 1218, outliers in the data are identified and filtered. For example, outlier data points that fall beyond a boundary, threshold, standard deviation, etc., are filtered (e.g., removed, separated, reduced, etc.) by the data processor 1130 from the data being processed. Filtering and/or other removal of outliers can be automatic by the data processor 1130 and/or can be triggered by interaction with the interface, data visualization, etc.

At block 1220, a model is built using the data. For example, the example model builder 1140 builds a model (e.g., trains and tests a supervised machine learning neural network and/or other learning model such as an unsupervised learning model, a deep learning model, a reinforcement learning model, a hybrid reinforcement learning model, etc.) using data and associated graphical representations to cluster representations for a patient, group patients together in relative alignment around a trigger event (e.g., an emergency condition, an anomaly, a particular physiological value, etc.). The model can thereby learn how and when to group similar or dissimilar graphical representations, highlight anomalies in a visual manner, etc.
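
One way such grouping could be realized, sketched here with k-means over hypothetical per-block features (the clustering algorithm and features are assumptions, not part of the disclosure):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
# Per-block feature vectors, e.g., [case length, event offset, mean strip value].
blocks = np.vstack([rng.normal([600, 120, 0.4], [50, 10, 0.05], (10, 3)),
                    rng.normal([1500, 300, 0.8], [80, 20, 0.05], (10, 3))])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(blocks)
print(groups)   # similar blocks share a label; stragglers suggest outliers
```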

At block 1222, the model is deployed. For example, the example model deployer 1150 can deploy an executable model once the model builder 1140 is satisfied with the training and testing. The deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, comparatively organize graphical representations according to one or more criteria, etc. As such, a graphical visualization can be generated from an output of the model. The model can be used to output prediction and/or detection results based on time-series data, and the output can be visualized graphically such as using the visualization processor 1160.

At block 1214, feedback is captured from use of the deployed model. For example, feedback can be captured from the deployed model itself, feedback can be captured from an application using the model, feedback can be captured from a human user, etc.

FIG. 13 is a flow diagram of an example method 1300 for dynamic generation and manipulation of a graphical user interface including visual, graphical representations of one-dimensional time-series data. At block 1302, time-series data is processed to normalize the data with respect to one or more reference values. For example, value(s) of the time-series data waveforms and/or other one-dimensional data stream can be adjusted (e.g., normalized) with respect to a reference value such as a normal value, a standard value, an accepted average value, an expected value, etc. The normalized data then expresses a degree or magnitude of difference from the reference value(s), which enables improved comparison of values, triggering of alerts, highlighting of anomalies, etc.

At block 1304, the normalized data is converted into one or more graphical representations of the underlying normalized 1D data. For example, normalized 1D time series data values can be provided to a deep learning model, such as an RL model, DL model, hybrid RL+DL model, etc., to convert the numerical value into a visual, graphical representation such as a line, strip, stripe, segment, bar, or band. For example, normalized heart rate waveform data can be fed into a hybrid RL+DL model to form a contiguous bar or strip graphical representation showing a trend, relative importance, anomaly, etc., in the underlying heart rate waveform data. A set of waveform data for a patient can be converted into a plurality of graphical representations (e.g., heart rate, blood pressure, lung volume, brain activity, etc.), for example. As such, normalized data is converted into a comparative visual representation based on a color, shading, texture, pattern, etc.

At block 1306, graphical representations are clustered for a given patient. For example, graphical representations of heart rate, blood pressure, brain wave activity, lung activity, etc., can be gathered together or clustered to be represented as a block of graphical representations for the patient. At block 1308, patient clusters are arranged with respect to a reference event. For example, a reference event, such as a stroke, seizure, fire, etc., can be used to align a plurality of patient clusters for visual comparison as to a point in the collection of data corresponding to the graphical representation at which the reference event occurred.

At block 1310, the arranged clusters/blocks are displayed via a graphical user interface. For example, as shown in the example interfaces of FIGS. 7-10E, blocks of graphical representations are displayed via the user interface for interaction alone, in conjunction with a reference event, in comparison with other blocks/clusters, etc. At block 1312, interaction with the blocks and constituent lines of graphical representation is facilitated via the graphical user interface. For example, a patient cluster or block can be selected for further review/interaction. An individual line of graphical representation can be selected for further review/interaction. For example, multiple blocks for a single patient can be selected and/or blocks representing multiple patients can be selected. An anomaly within a graphical representation of particular 1D data can be selected for review of/interaction with underlying 1D time series data, for example. In certain examples, all or some of the displayed representations can be selected to trigger generation of a data set for training and/or testing of one or more AI models.

At block 1314, an action is triggered with respect to underlying data based on the interaction with the graphical representation(s) of the user interface displayed. For example, associated time series data can be processed, combined with other 1D data, transmitted to another process/system, stored/reported in an electronic medical record, converted to an order (e.g., for labs, monitoring, etc.), etc. In certain examples, graphical representations selected to form a data set for training and/or testing of one or more AI models can be annotated via interaction to form a “ground truth” data set for model training, testing, etc. At block 1316, the user interface is updated based on interaction, triggered action, etc. For example, a change in the data, combination of data, further physiological and/or device monitoring, etc., can result in a change in graphical representation, an addition or subtraction of graphical representation, highlighting of an anomaly, identification of a correlation, etc., updated and displayed via the graphical user interface.

FIG. 14 is a flow diagram of an example method 1400 to facilitate interaction with graphical representations arranged and displayed via a graphical user interface (e.g., block 1312 of the example of FIG. 13). At block 1402, input with respect to the graphical user interface is processed. The input (e.g., user selection, program execution, access by another system or device, etc.) can trigger interaction with one or more elements of the graphical user interface.

At block 1404, interaction with a patient cluster of graphical representations is enabled. For example, a user and/or other program, device, system, etc., can interact with a patient cluster or block 810, 820. The block 810, 820 can be analyzed as a group or set of individual graphical representation lines/strips 812-818, 822-828 to determine pattern(s) for a patient, compare patients, reorder and/or otherwise adjust comparative positioning of patient blocks 810, 820, etc. For example, patient blocks 810, 820 can be positioned adjacent to each other to trigger a comparison of values. A reference or triggering event 840 can be activated with respect to patient blocks 810, 820 to trigger automated alignment of the blocks 810, 820 with respect to the event indicator 840.

At block 1406, interaction with a graphical representation is enabled. For example, a user and/or other program, device, system, etc., can interact with a strip 812-818, 822-828 to drill down to underlying data (e.g., as shown in the examples of FIGS. 6, 10D, 10E, etc.). Strips 812-818 can be selected for grouping into a data set for annotation and AI model training and/or testing, etc., for example. At block 1408, interaction with an anomaly in a graphical representation is enabled. For example, a user and/or other program, device, system, etc., can select an anomaly (e.g., the anomaly 1072 in the strip 1068) to view underlying signal data (e.g., as shown in the example of FIG. 10D), trigger analytics processing with respect to the selected anomaly (e.g., as shown in the example of FIG. 10E), etc. Alternatively or in addition, an anomaly or outlier can be excluded from a data set to be formed for AI model training, testing, etc.

At block 1410, interaction with the blocks, graphical representation elements, anomaly, etc., is processed. For example, additional data, underlying detail, application execution, rearrangement of elements on the graphical user interface, etc., can be processed based on the interaction at block 1404, 1406, and/or 1408. Control then reverts to block 1314 to trigger action with respect to underlying data based on the interaction.

Thus, certain examples provide a variety of displays and associated interactions to drive information retrieval, analysis, combination, correlation, patient care, and other healthcare workflows. In brief, as disclosed and described herein, it is envisioned that a graphical user interface can transition from any interface shown in FIGS. 5-10E to any other interface shown in FIGS. 5-10E.

For example, navigation can begin with the multi-patient view of FIG. 10A, from which a single patient can be selected to access the single patient view of FIG. 10B. In the single patient view, demographic data, historic information, vitals, captured signal data, etc., can be displayed. From the single patient view, a block graphical representation (e.g., FIG. 10C) can be displayed to visualize collected 1D signal data of the tree in a holistic, “block” graphical representation format for analysis, selection for AI model training/testing, etc. From the block representation of the single patient, a graphical representation line or band within the block can be selected to show the underlying signal data used to form the graphical representation (FIG. 10D). Alternatively or in addition, the multi-patient tree representational view of FIGS. 8 and/or 9 can be triggered by interaction with the block to show the patient's representation in comparison to graphical representations of other patients, for example.

In another example, navigation begins with a multi-patient graphical representation such as the tree of FIG. 8, the forest of FIG. 9, etc. Selection of a block within the multi-patient graphical representation transforms the display to a single patient representation of the associated block such as shown in the example of FIG. 7. From the single block, an individual graphical representation can be selected to display the underlying 1D signal data forming the graphical representation (e.g., FIG. 10D). Alternatively or in addition, selection of the block can trigger generation of a single-patient view, such as the single patient interface view of FIG. 10B, to show information for the patient including signal waveforms forming the graphical representations of the block, for example.

Other variations of graphical user interface transformation are envisioned, such as beginning with a multi-patient tree representation of FIGS. 8 and/or 9 and interacting with one or more blocks of the tree to transform the interface into the multi-patient view of FIG. 10A. From the multi-patient view, the single patient view of FIG. 10B can be selected, and interaction with signal values, etc., in the single patient view can trigger display of the single patient representation of FIG. 7 and/or back to the multi-patient representation of FIGS. 8 and/or 9.
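
The envisioned view-to-view transitions can be summarized as a navigation graph (a sketch only; the view names and allowed transitions below merely paraphrase the examples above):

```python
# Hypothetical navigation graph over the interface views of FIGS. 7-10E;
# the view names and allowed transitions merely paraphrase the examples above.
NAV = {
    "forest (FIG. 9)": ["tree (FIG. 8)"],
    "tree (FIG. 8)": ["block (FIG. 7)", "multi-patient (FIG. 10A)"],
    "block (FIG. 7)": ["signal detail (FIG. 10D)", "single-patient (FIG. 10B)"],
    "multi-patient (FIG. 10A)": ["single-patient (FIG. 10B)"],
    "single-patient (FIG. 10B)": ["history block (FIG. 10C)", "block (FIG. 7)"],
    "history block (FIG. 10C)": ["signal detail (FIG. 10D)"],
    "signal detail (FIG. 10D)": ["analysis (FIG. 10E)"],
}

def transitions(view):
    """Return the views reachable from the current view."""
    return NAV.get(view, [])

print(transitions("tree (FIG. 8)"))
```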

While example implementations are disclosed and described herein, processes and/or devices disclosed and described herein can be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, components disclosed and described herein can be implemented by hardware, machine readable instructions, software, firmware and/or any combination of hardware, machine readable instructions, software and/or firmware. Thus, for example, components disclosed and described herein can be implemented by analog and/or digital circuit(s), logic circuit(s), programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the components is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware.

Flowcharts representative of example machine readable instructions for implementing components are disclosed and described herein. In the examples, the machine readable instructions include a program for execution by a processor. The program may be embodied in machine readable instructions stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to flowchart(s), many other methods of implementing the components disclosed and described herein may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Although the flowchart(s) depict example operations in an illustrated order, these operations are not exhaustive and are not limited to the illustrated order. In addition, various changes and modifications may be made by one skilled in the art within the spirit and scope of the disclosure. For example, blocks illustrated in the flowchart may be performed in an alternative order or may be performed in parallel.

As mentioned above, the example process(es) can be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, “tangible computer readable storage medium” and “tangible machine readable storage medium” are used interchangeably. Additionally or alternatively, the example process(es) can be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” is open ended. In addition, the term “including” is open-ended in the same manner as the term “comprising” is open-ended.

FIG. 15 is a block diagram of an example processor platform 1500 structured to execute the instructions of FIGS. 12-14 to implement, for example, the example apparatus 1100 of FIG. 11. The processor platform 1500 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.

The processor platform 1500 of the illustrated example includes a processor 1512. The processor 1512 of the illustrated example is hardware. For example, the processor 1512 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor 1512 implements the example apparatus 1100 but can also be used to implement other systems disclosed herein such as systems 100, 200, 300, 400, etc.

The processor 1512 of the illustrated example includes a local memory 1513 (e.g., a cache). The processor 1512 of the illustrated example is in communication with a main memory including a volatile memory 1514 and a non-volatile memory 1516 via a bus 1518. The volatile memory 1514 may be implemented by SDRAM, DRAM, RDRAM®, and/or any other type of random access memory device. The non-volatile memory 1516 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1514, 1516 is controlled by a memory controller.

The processor platform 1500 of the illustrated example also includes an interface circuit 1520. The interface circuit 1520 may be implemented by any type of interface standard, such as an Ethernet interface, a USB, a Bluetooth® interface, an NFC interface, and/or a PCI express interface.

In the illustrated example, one or more input devices 1522 are connected to the interface circuit 1520. The input device(s) 1522 permit(s) a user to enter data and/or commands into the processor 1512. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint, and/or a voice recognition system.

One or more output devices 1524 are also connected to the interface circuit 1520 of the illustrated example. The output devices 1524 can be implemented, for example, by display devices (e.g., an LED, an OLED, an LCD, a CRT display, an IPS display, a touchscreen, etc.), a tactile output device, a printer, and/or speaker. The interface circuit 1520 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.

The interface circuit 1520 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1526. The communication can be via, for example, an Ethernet connection, a DSL connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.

The processor platform 1500 of the illustrated example also includes one or more mass storage devices 1528 for storing software and/or data. Examples of such mass storage devices 1528 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and DVD drives.

The machine executable instructions 1532 of FIGS. 12-14 may be stored in the mass storage device 1528, in the volatile memory 1514, in the non-volatile memory 1516, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.

From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that improve graphical user interface generation, configuration, interaction, and display. The disclosed apparatus, systems, methods, and articles of manufacture improve the efficiency and effectiveness of the processor system, memory, and other associated circuitry by leveraging artificial intelligence models, transformations of waveform and/or other time-series data into comparative graphical representations, comparative analysis of patient data, etc. In certain examples, a deep learning model can convert one-dimensional data from monitoring of a patient, medical device(s), medical equipment, information system(s), etc., into a comparative graphical representation, such as a gradient-based graphical representation visually indicating a change in value over time for the respective data source/value. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer and/or other processor and its associated interface. The apparatus, methods, systems, instructions, and media disclosed herein are not implementable in a human mind and are not able to be manually implemented by a human user.

Although certain example methods, apparatus and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.

Claims

1. A time series data visualization apparatus comprising:

a data processor to process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference;
a visualization processor to transform the processed data into a plurality of graphical representations visually indicating a change over time in the data and to cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion;
an interface builder to construct a graphical user interface to display the at least first and second blocks of graphical representations; and
an interaction processor to facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

2. The apparatus of claim 1, wherein each graphical representation is displayed as a bar using at least one of a color, a pattern, a texture, or a gradient.

3. The apparatus of claim 1, wherein the one-dimensional data is to be captured from at least one of a sensor monitoring a physiological signal of the patient or a medical device operating with respect to a patient.

4. The apparatus of claim 1, wherein the interaction to extract the data set for processing is to include selecting at least a subset of the first and second blocks for at least one of training, testing, or validation of an artificial intelligence model.

5. The apparatus of claim 1, wherein selection of the first block is to trigger display of a single patient view including one or more waveform signals associated with the first block.

6. The apparatus of claim 1, wherein the processing of the extracted data set is to include analyzing a pattern of data for one or more patients associated with the extracted data set.

7. The apparatus of claim 1, wherein the indicator of the criterion includes a visual indication of an event, and wherein the first block and the second block represent at least one of a) two occurrences of the event for one patient or b) one occurrence of the event for two patients.

8. The apparatus of claim 1, wherein the indicator is a first indicator and the criterion is a first criterion, and wherein the first block and the second block arranged with respect to the first indicator of the first criterion form a first tree representation, the first tree displayed via the graphical user interface with a second tree, the second tree including the first block and the second block arranged with respect to a second indicator of a second criterion.

9. The apparatus of claim 1, wherein the interface builder is to build a multi-patient view to be displayed via the graphical user interface, wherein the interaction processor is to facilitate selection of a patient within the multi-patient view to trigger a single-patient view displayed by the interface builder via the graphical user interface, wherein the interaction processor is to facilitate interaction with the single-patient view to trigger display of the first block from the single-patient view via the graphical user interface and to facilitate selection of a first graphical representation within the first block to display, via the graphical user interface, the one-dimensional data associated with the first graphical representation.

10. The apparatus of claim 1, wherein the interaction processor is to facilitate selection of a patient within a multi-patient interface view to trigger display of the first block via the graphical user interface by the interface builder, wherein the interaction processor is to facilitate selection of the first block via the graphical user interface to trigger display, by the interface builder via the graphical user interface, of a single-patient view including one-dimensional data associated with the first block.

11. At least one tangible computer-readable storage medium comprising instructions that, when executed, cause at least one processor to at least:

process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference;
transform the processed data into a plurality of graphical representations visually indicating a change over time in the data;
cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion, the first block, the second block, and the indicator to be displayed via a graphical user interface; and
facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

12. The at least one computer-readable storage medium of claim 11, wherein the instructions, when executed, cause the at least one processor to display each graphical representation as a bar using at least one of a color, a pattern, a texture, or a gradient.

13. The at least one computer-readable storage medium of claim 11, wherein the one-dimensional data is to be captured from at least one of a sensor monitoring a physiological signal of the patient or a medical device operating with respect to a patient.

14. The at least one computer-readable storage medium of claim 13, wherein the interaction to extract the data set for processing is to include selecting at least a subset of the first and second blocks for at least one of training, testing, or validation of an artificial intelligence model.

15. The at least one computer-readable storage medium of claim 11, wherein the instructions, when executed, cause the processor, in response to selection of the first block, to trigger display of a single patient view including one or more waveform signals associated with the first block.

16. The at least one computer-readable storage medium of claim 11, wherein the processing of the extracted data set is to include analyzing a pattern of data for one or more patients associated with the extracted data set.

17. The at least one computer-readable storage medium of claim 11, wherein the indicator of the criterion includes a visual indication of an event, and wherein the first block and the second block represent at least one of a) two occurrences of the event for one patient or b) one occurrence of the event for two patients.

18. The at least one computer-readable storage medium of claim 11, wherein the indicator is a first indicator and the criterion is a first criterion, and wherein the instructions, when executed, cause the at least one processor to arrange the first block and the second block with respect to a first indicator of the first criterion to form a first tree representation, the first tree to be displayed via the graphical user interface with a second tree, the second tree including the first block and the second block arranged with respect to a second indicator of a second criterion.

19. The at least one computer-readable storage medium of claim 11, wherein the instructions, when executed, cause the at least one processor to:

display, based on selection of the first block, the first block via the graphical user interface; and
display, based on selection of the first block via the graphical user interface, a single-patient view including one-dimensional data associated with the first block.

20. A computer-implemented method for medical machine time-series event data processing and visualization, the method comprising:

processing one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference;
transforming the processed data into a plurality of graphical representations visually indicating a change over time in the data;
clustering the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion; and
facilitating interaction, via a graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

21. The method of claim 20, further including:

displaying, based on selection of the first block, the first block via the graphical user interface; and
displaying, based on selection of the first block via the graphical user interface, a single-patient view including one-dimensional data associated with the first block.
Patent History
Publication number: 20200342968
Type: Application
Filed: Oct 17, 2019
Publication Date: Oct 29, 2020
Inventors: Gopal B. Avinash (San Ramon, CA), Qian Zhao (San Ramon, CA), Zili Ma (San Ramon, CA), Dibyajyoti Pati (San Ramon, CA), Venkata Ratnam Saripalli (San Ramon, CA), Ravi Soni (San Ramon, CA), Jiahui Guan (San Ramon, CA), Min Zhang (San Ramon, CA)
Application Number: 16/656,034
Classifications
International Classification: G16H 15/00 (20060101); G16H 40/67 (20060101); G06F 9/451 (20060101); G06N 20/00 (20060101); G16H 10/60 (20060101);