METHODS TO ESTIMATE EFFECTIVENESS OF A MEDICAL TREATMENT

Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for estimating treatment effect of medical treatments.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation application of U.S. application Ser. No. 16/790,216, filed on Feb. 13, 2020, which claims priority under 35 U.S.C. § 119(e) to U.S. Application Ser. No. 62/812,487, filed on Mar. 1, 2019, and claims priority to EP Application Serial No. 19305643.9, filed on May 21, 2019, the disclosures of which are incorporated herein by reference.

BACKGROUND

Responder analyses are conducted in clinical data analytics to examine the treatment effect of an investigational product or medical practice and to determine how a patient responds to the treatment. A patient is considered a responder when the patient's response to a treatment exceeds a threshold.

SUMMARY

Implementations of the present disclosure include computer-implemented methods for estimating the treatment effect of a medical treatment. The implementations make this estimation by applying a neural network model to clinical covariates (e.g., blood pressure, heart rate, body temperature, etc.) of patients, and comparing the output for patients who have used the medical treatment to the output for patients who have not used the medical treatment. The effectiveness of a medical treatment on a patient can be determined by a response indicator (e.g., a biomarker) that indicates how the patient's body has responded to the medical treatment.

In some implementations, the method includes: receiving, from a database, a dataset comprising: a set of covariate vectors, each covariate vector including clinical covariates of a respective patient, and a set of response indicators, each response indicator being associated with a respective covariate vector, wherein response indicators of the set of response indicators vary over a range; receiving, from the database, multiple partitions on the range of the response indicators to define a plurality of response classes for the response indicator; converting each response indicator to a respective one-hot encoded vector based on response classes indicated by the multiple partitions; training a neural network model based on each covariate vector and a respective one-hot encoded vector associated with the covariate vector, the neural network model utilizing a non-linear activation function and a loss function; and estimating a treatment effect of a medical treatment by: determining a first subset of covariate vectors for patients who received the medical treatment, and a second subset of covariate vectors for patients who did not receive the medical treatment, and comparing, for one or more response classes, (i) a first probability to (ii) a second probability, the first probability being a probability that covariate vectors of the first subset of covariate vectors are associated with the one or more response classes, the second probability being a probability that covariate vectors of the second subset of covariate vectors are associated with the one or more response classes, wherein the first probability and the second probability are calculated by using the trained neural network model; and providing the estimated treatment effect for display on a graphical user interface of a computing device. Other implementations include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.

In some implementations, the method includes: receiving, from a database, a first dataset comprising: a set of covariate vectors, each covariate vector including clinical covariates of a respective patient, and a set of response indicators, each response indicator being associated with a respective covariate vector, wherein response indicators of the set of response indicators vary over a range; receiving, from the database, multiple partitions on the range of the response indicators to define a plurality of response classes for the response indicator; training a neural network model (NNM) with the first dataset to obtain a first NNM; bootstrapping the first dataset n times to obtain n second datasets; training the NNM with each second dataset in the n second datasets to obtain n second NNMs; for each covariate vector: obtaining, from the first NNM and the n second NNMs, n+1 predicted responses for the covariate vector, each predicted response being obtained by applying a respective one of the first NNM and the n second NNMs to the covariate vector, and for each response class: estimating an association probability indicating that the covariate vector is associated with the response class by: applying, for the response class, an indicator function to each of the n+1 predicted responses to obtain n+1 outputs for the response class, and calculating the association probability by normalizing an aggregation of the obtained n+1 outputs for the response class; estimating a treatment effect of a medical treatment by: determining a first subset of covariate vectors for patients who received the medical treatment, and a second subset of covariate vectors for patients who did not receive the medical treatment, and comparing, for one or more response classes, (i) a first normalized aggregation of association probabilities for the first subset, with (ii) a second normalized aggregation of association probabilities of the second subset; and providing the estimated treatment effect for display on a graphical user interface of a computing device.

The present disclosure also provides one or more non-transitory computer-readable storage media coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.

The present disclosure further provides a system for implementing the methods provided herein. The system includes one or more processors, and a computer-readable storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.

Methods in accordance with the present disclosure may include any combination of the aspects and features described herein. That is, methods in accordance with the present disclosure are not limited to the combinations of aspects and features specifically described herein, but also include any combination of the aspects and features provided.

Among other advantages, the present implementations provide the following benefits. The methods presented herein can be used to predict the effectiveness of a medical treatment on a particular patient. An accurate prediction of a treatment effect can significantly improve medical and health systems, for example, by prescribing to a particular patient the medical treatments predicted to be more effective for that patient, and by eliminating treatments that appear less effective from the patient's prescriptions. This can result in faster recovery, fewer side effects that the patient would otherwise encounter from taking less effective treatments, and lower treatment costs (including money, time, and use of clinical facilities and healthcare providers).

The present implementations take a non-parametric approach by using a neural network model without making a linearity assumption for the relationship between the model's predictors and the patients' response variables. This approach is advantageous over generalized linear models, which can suffer from loss of power when the linearity assumption is violated. For example, the response endpoints may not be linearly associated with the covariates of interest, or patient characteristics (e.g., covariates) may not be fully balanced. The present disclosure provides two methods to overcome such limitations. The first method discretizes the continuous response variables into categorical variables and applies a neural network model to the categories without making a linearity assumption, taking advantage of the automatic feature representation capability embedded in deep learning methods. The second method builds upon the first method, but does not discretize the continuous response variables. Accordingly, the present implementations improve estimation accuracy over linear methods by avoiding a linearity assumption in the relationship between the predictors and the response variables.

The details of one or more implementations of the present disclosure are set forth in the accompanying drawings and the description below. Other features and advantages of the present disclosure will be apparent from the description and drawings, and from the claims.

DESCRIPTION OF DRAWINGS

FIG. 1 depicts an example environment that can be used to execute implementations of the present disclosure.

FIG. 2 depicts an example feedforward neural network model that can be used for the implementations of the present disclosure.

FIG. 3 depicts an example process that can be executed in accordance with implementations of the present disclosure.

FIGS. 4A-4B depict an example process that can be executed in accordance with implementations of the present disclosure.

FIG. 5 is a schematic illustration of example computer systems that can be used to execute implementations of the present disclosure.

Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

Implementations of the present disclosure include computer-implemented methods for estimating treatment effect of medical treatments. The implementations also use deep learning methods to predict the probability of a patient with particular clinical covariates being a responder to a medical treatment.

FIG. 1 depicts an example environment 100 that can be used to execute implementations of the present disclosure. The environment 100 illustrates a user 116 that uses a computing device 102 to request an estimate of a treatment effect. The computing device 102 is in communication with a database 106, for example, through a network 110. The database 106 stores clinical covariate data of sample patients. The database 106 provides this data to the computing device 102. The computing device 102 uses this data to train a (deep) neural network model (NNM) and provide an estimate on the probability of effectiveness of the medical treatment (e.g., in general, or for a particular patient). Alternatively, or in addition, the database 106 can provide the data to a computing device 108 that includes one or more processors 104 to perform the estimation procedures.

The NNM can utilize a non-linear activation function (e.g., a softmax activation function or a rectified linear unit (ReLU) activation function) and a loss function (e.g., a cross-entropy loss or a least square error (L2) loss).

The NNM used for calculating the probability can include one or more feedforward NNMs. FIG. 2 illustrates an example feedforward NNM 200 that can be used for the implementations of the present disclosure. The NNM 200 includes multiple layers of computational units interconnected in a feedforward way. Each neuron in one layer has directed connections to the neurons of the subsequent layer. There are numerous choices of the activation function (such as sigmoid, ReLU, etc.) that links neurons in adjacent layers. Parameters of the NNM can be computed by the stochastic gradient descent algorithm. An NNM can be trained by searching for hyperparameters (e.g., number of layers, number of neurons in each layer, etc.) of the NNM based on the NNM's performance on a validation set, for example, to avoid overfitting issues and/or to find the lowest validation error.
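For illustration only, the following is a minimal sketch of a feedforward NNM of the kind described above, written in PyTorch. The layer sizes, learning rate, epoch count, and the `train_nnm` helper name are assumptions made for this sketch and are not prescribed by the disclosure.

```python
# Minimal sketch of a feedforward NNM, assuming PyTorch; the layer sizes,
# learning rate, and epoch count are illustrative choices only.
import torch
import torch.nn as nn

class FeedforwardNNM(nn.Module):
    def __init__(self, n_covariates: int, n_classes: int, hidden: int = 32):
        super().__init__()
        # Two hidden layers with ReLU activations; the final layer emits
        # one logit per response class.
        self.net = nn.Sequential(
            nn.Linear(n_covariates, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)  # logits; softmax is applied inside the loss below

def train_nnm(model, x, z, epochs=200, lr=1e-3):
    """Fit the model by stochastic gradient descent with a cross-entropy loss."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()   # combines softmax and cross-entropy
    targets = z.argmax(dim=1)         # one-hot encoded vectors -> class indices
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), targets)
        loss.backward()
        optimizer.step()
    return model
```

Hyperparameters such as the number of layers and the number of neurons per layer would, as described above, be chosen by searching over candidate configurations and keeping the one with the lowest validation error.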

In some implementations, only one feedforward neural network is used. Such implementations provide at least two advantages. First, the relationship among multiple outputs can be automatically handled by connections in the lower layers. Second, it is computationally simple and provides the ability to do fast hyperparameter search in a relatively small space. The NNM is denoted by f(X) herein, where X denotes a covariate vector.

FIGS. 3 and 4A-4B depict two example processes that can be executed in accordance with implementations of the present disclosure to determine an effectiveness of a medical treatment. For ease of description, the processes described herein are categorized into two methods. It would be understood by a skilled artisan that a system may benefit from either or both of these two methods.

First Method:

The first method is depicted in FIG. 3 and can be executed by one or more computing devices (e.g., the computing devices 102 or 108 in FIG. 1). The one or more computing devices receive a dataset {(xi,yi), i=1, . . . , q}, for example, from a database such as database 106 (302). The dataset includes a set of covariate vectors X and a set of response indicators Y. Each covariate vector xi includes clinical covariates of a respective patient. Each response indicator (also can be referred to as “endpoint response” or “endpoint responder”) yi is associated with a respective covariate vector xi.

The response indicators vary over a range. The range is partitioned into levels C1<C2< . . . <Ck∈supp(Y). Each Cj represents a response class. In some implementations, the one or more computing devices receive the partitions that indicate the response classes (304), for example, from the database. In some implementations, the computing devices receive the partitions from an operator, for example, the user 116.

To determine the efficacy of a medical treatment, probabilities of the patients' response indicators are calculated for each response class—i.e., P(Y<C1), P(Y<C2), . . . , P(Y<Ck). Efficacy of a drug or treatment can be confirmed if the treatment results in high probabilities for one or more key response classes (for example, two). For example, a percentage change (PCHG) of a key biomarker associated with asthma can be studied to determine the effectiveness of an asthma medicine. A lower value of the biomarker indicates a healthier condition; thus a lower PCHG indicates a better efficacy of a medical treatment on asthma. In this example, the PCHG is the response indicator. Three partitions of −50%, −25%, and 0 may be used to divide the range of variation of the response indicator into response classes C1<−50%, −50%<C2<−25%, −25%<C3<0, and C4>0. In this example, a higher probability of the PCHG falling in either class C1 or C2 indicates a greater effectiveness of the asthma medicine.

The one or more computing devices convert each of the response indicators received in 302 into a one-hot encoded vector (306) based on the response classes indicated by the partitions. (A one-hot encoded vector is a vector in which exactly one element is "on," i.e., has a value of 1, while all other elements are 0.) In other words, the computing devices generate a zi vector for each xi covariate vector (and its respective yi response indicator) by:


$z_i = \big(I(y_i < C_1),\ I(C_1 \le y_i < C_2),\ \ldots,\ I(C_{k-1} \le y_i < C_k),\ I(y_i \ge C_k)\big)$  (1)
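As a concrete illustration of equation (1), the sketch below (assuming NumPy; the helper name `one_hot_encode` is hypothetical) converts a response indicator into a one-hot encoded vector using the PCHG cut points −50%, −25%, and 0 from the asthma example.

```python
# Hypothetical helper that one-hot encodes a response indicator y_i
# against ordered cut points C_1 < C_2 < ... < C_k (equation (1)).
import numpy as np

def one_hot_encode(y, cuts):
    """Return a length-(k+1) one-hot vector for the class containing y."""
    cuts = sorted(cuts)
    edges = [-np.inf] + list(cuts) + [np.inf]   # k+1 disjoint intervals
    z = np.zeros(len(cuts) + 1)
    for j in range(len(edges) - 1):
        if edges[j] <= y < edges[j + 1]:
            z[j] = 1.0
            break
    return z

# PCHG example: cut points -50%, -25%, 0 give four response classes.
print(one_hot_encode(-0.6, cuts=[-0.50, -0.25, 0.0]))  # [1. 0. 0. 0.]
print(one_hot_encode(-0.3, cuts=[-0.50, -0.25, 0.0]))  # [0. 1. 0. 0.]
```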

The one or more computing devices train a NNM (308) with the set of covariate vectors and their respective one-hot encoded vectors—i.e., {(xi, zi)}, i=1, 2, . . . , q—to obtain a trained NNM f̂(x).

Applying the trained NNM to each covariate vector xi, i=1, . . . , q, the trained NNM provides the probability of each response class associated with the covariate vector—i.e.:

$\hat{P}(y_i < C_1),\ \hat{P}(y_i < C_2),\ \ldots,\ \hat{P}(y_i < C_k)$ for $i = 1, 2, \ldots, q$.

The computing devices estimate (310) the treatment effect of a medical treatment by applying the trained NNM to the data {(xi, yi)}, i=1, 2, . . . , q. To do so, the computing devices determine (or receive from the database) a first subset of covariate vectors for patients who received the medical treatment, and a second subset of covariate vectors for patients who did not receive the medical treatment. The computing devices then compare, for one or more response classes, (i) a first probability to (ii) a second probability. The first probability is a probability that covariate vectors of the first subset of covariate vectors are associated with the one or more response classes, and the second probability is a probability that covariate vectors of the second subset of covariate vectors are associated with the one or more response classes. The computing devices receive the first and the second probabilities from the trained NNM. The one or more response classes (for which the first probability is compared to the second probability) can be all of the response classes, or particular ones of the response classes (e.g., response classes C1<−50% and −50%<C2<−25% in the above example).

More particularly, considering Cj as a key response class and T as a component of X denoting treatment (e.g., T=1 denoting treatment and T=0 denoting no treatment), the treatment effect is calculated by:

$\hat{\pi}_{C_j} = \hat{P}^{(T=1)}(Y < C_j) - \hat{P}^{(T=0)}(Y < C_j)$  (2)

where

$\hat{P}^{(T=1)}(Y < C_j) = \dfrac{1}{\sum_i I(t_i = 1)} \sum_{t_i = 1} \hat{P}(y_i < C_j)$  (3)

and

$\hat{P}^{(T=0)}(Y < C_j) = \dfrac{1}{\sum_i I(t_i = 0)} \sum_{t_i = 0} \hat{P}(y_i < C_j)$  (4)
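Under the assumption that the trained NNM's predicted probabilities P̂(yi < Cj) have already been collected into arrays for the treated and untreated subsets, equations (2)–(4) reduce to a difference of group means, as in this sketch (function and argument names are illustrative, not from the disclosure).

```python
import numpy as np

def treatment_effect(prob_treated, prob_untreated, class_index):
    """Equations (2)-(4): difference of group-averaged predicted probabilities.

    prob_treated / prob_untreated: arrays of shape (n_patients, k) holding the
    trained NNM's probabilities P_hat(y_i < C_j) for each patient; class_index
    selects the key response class C_j.
    """
    p_t1 = prob_treated[:, class_index].mean()    # equation (3)
    p_t0 = prob_untreated[:, class_index].mean()  # equation (4)
    return p_t1 - p_t0                            # equation (2)
```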

Such an aggregated estimator (see equations (3) and (4)) improves accuracy in estimating the treatment effect with a reasonably large sample size. Since covariates are correlated and there is no linearity assumption in the (deep) NNM, even with complex relationships among covariates or between covariates and the outputs of the NNM, observations (e.g., components of covariate vectors, response indicators) containing real intrinsic associations can be used to obtain accurate results.

The one or more computing devices provide the estimated treatment effect for presentation. For example, the computing devices can provide the estimated treatment effect for display on a graphical user interface (312), for example, a graphical user interface of the computing device 102 in FIG. 1.

As noted above, probabilities of the patients' response indicators are calculated for each response class—i.e., P(Y<C1), P(Y<C2), . . . , P(Y<Ck). Using an ordinal classification (or partitioning) on the response indicator range can impose hard constraints on the outputs of the NNM. To simplify training of the NNM, the partitioning can be performed as a nominal classification instead of an ordinal classification. Such a nominal classification provides disjoint response classes, such as (Y<C1), (C1≤Y<C2), . . . , (Ck−1≤Y<Ck), (Y≥Ck). The output of the NNM for these disjoint response classes is in the form of:


$P(Y < C_1),\ P(C_1 \le Y < C_2),\ \ldots,\ P(C_{k-1} \le Y < C_k),\ P(Y \ge C_k),$

which results in k+1 components in the output. With the trained f̂(x) and a particular set of covariates, the computing devices can obtain the estimated probabilities for the response classes of an ordinal classification through cumulative sums:


$\hat{P}(Y < C_1) = \hat{P}(Y < C_1),$

$\hat{P}(Y < C_2) = \hat{P}(Y < C_1) + \hat{P}(C_1 \le Y < C_2),$

$\ldots$

$\hat{P}(Y < C_k) = \hat{P}(Y < C_1) + \sum_{j=2}^{k} \hat{P}(C_{j-1} \le Y < C_j).$

Such a procedure provides the advantages of (i) a one-to-one transformation with no information loss; and (ii) obtaining a standard classification problem after the transformation, which can be accurately and efficiently addressed by the deep neural network.
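A short sketch of the cumulative-sum step, assuming the trained NNM returns the k+1 disjoint (nominal) class probabilities as a NumPy array; the ordinal probabilities P̂(Y < Cj) are running sums of the first j entries.

```python
import numpy as np

def nominal_to_ordinal(class_probs):
    """Convert disjoint-class probabilities into cumulative P_hat(Y < C_j).

    class_probs: length k+1 array (P(Y<C1), P(C1<=Y<C2), ..., P(Y>=Ck)).
    Returns the k ordinal probabilities P(Y<C1), ..., P(Y<Ck).
    """
    return np.cumsum(class_probs[:-1])

# Example: a softmax output over four classes -> three cumulative probabilities.
print(nominal_to_ordinal(np.array([0.4, 0.3, 0.2, 0.1])))  # [0.4 0.7 0.9]
```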

The first method described above can include bootstrapping the data received at 302. Bootstrapping can be beneficial when not enough samples are available to train a neural network or when more samples than the available ones are desired for training. Bootstrapping the received data includes bootstrapping (or resampling from) the set of covariate vectors and their respective response indicators. The original data and the bootstrapped data can be used for training and/or validation of the NNM.

Second Method:

The second method is depicted in FIGS. 4A-4B and can be executed by one or more computing devices (e.g., the computing devices 102 or 108 in FIG. 1). The second method eliminates converting (or transforming) the response indicators (yi) to one-hot encoded vectors (zi). Instead, a NNM is used to directly model the relationship between the covariate vectors X and the response indicators Y. As an advantage, eliminating the transformation of the response indicators to encoded vectors can reduce power loss.

The second method can include two levels of bootstrapping. The first level is used to estimate the probability that a patient (or a covariate vector associated with a patient) is a responder to a medical treatment and to calculate the effectiveness of the medical treatment based on such probabilities. The second level is used to provide an uncertainty estimate around the estimated probabilities.

In the first level of bootstrapping, the second method bootstraps the received data n times to obtain n sets of bootstrapped data. Each set of bootstrapped data is used to train the NNM and obtain a respective trained NNM, f(bp)(X). The second method then collects the predicted response indicators {ŷ(bp), bp=1, . . . , n} that are predicted by the trained NNMs f(bp)(X) for each covariate vector xi. The method estimates the efficacy of a medical treatment based on the probabilities of the predicted response indicators (for each covariate vector) being within one or more particular response classes. The following paragraphs provide a detailed description of the second method.

In the second method, the one or more computing devices noted above receive a first dataset {(xi,yi), i=1, . . . , q}, for example, from a database such as database 106 (402). The first dataset includes a set of covariate vectors X and a set of response indicators Y. Each covariate vector xi includes clinical covariates of a respective patient. Each response indicator yi is associated with a respective covariate vector xi.

The response indicators vary over a range. The range is partitioned into levels C1<C2< . . . <Ck∈supp(Y). Each Cj represents a response class. In some implementations, the one or more computing devices receive the partitions that indicate the response classes (404), for example, from the database. In some implementations, the computing device receives the partitions from an operator, for example, the user 116.

The one or more computing devices train a NNM with the first dataset {(xi,yi), i=1, . . . , q} to obtain a first NNM (406). The NNM can include a feedforward neural network. The training can include training the NNM using a plurality of hyperparameters, and selecting, for the first NNM, the set of hyperparameters (from the plurality of hyperparameters) that has the lowest validation error.

The computing devices bootstrap the first dataset n times to obtain n second datasets {(xi,yi)(bp); i=1, . . . , q; p=1, . . . , n} (408). The computing devices train the NNM with each of the second datasets to obtain n second NNMs (410). Training one or more of the second NNMs can be done in a process similar to the process of training the first NNM described in the preceding paragraph.
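The bootstrap-and-retrain steps at 408–410 might be organized as in the following sketch, which assumes a hypothetical `fit_regression_nnm(x, y)` helper that fits a regression NNM to a resampled dataset and returns the trained model; patients are sampled with replacement.

```python
# Sketch of steps 408-410: bootstrap the first dataset n times and train one
# NNM per resample. `fit_regression_nnm` is a hypothetical helper, not an API
# named in the disclosure.
import numpy as np

def bootstrap_models(x, y, n, fit_regression_nnm, rng=None):
    rng = rng or np.random.default_rng(0)
    q = len(y)
    models = []
    for _ in range(n):
        idx = rng.integers(0, q, size=q)   # resample patients with replacement
        models.append(fit_regression_nnm(x[idx], y[idx]))
    return models
```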

The one or more computing devices obtain n+1 predicted responses for each covariate vector of the first dataset from the trained NNMs (i.e., from the first NNM and the n second NNMs) (412). Each predicted response is obtained by applying a respective one of the first NNM and the n second NNMs to the covariate vector. The obtained predicted responses for a covariate vector xi can be denoted by {ŷi(bp), bp=1, . . . , n+1}.

A probability of a covariate vector being a responder to a medical treatment indicates how likely a patient with clinical covariates similar to the covariate vector is to respond, or to have responded, to the medical treatment. The computing devices estimate the probability of a covariate vector xi being a responder to a medical treatment by estimating an association probability of the covariate vector for one or more key response classes. For example, the computing devices may estimate the association probability for the covariate vector xi for each response class that was identified at 404 (414).

The association probability of a response class for a covariate vector xi can be calculated by applying an indicator function to each of the n+1 predicted responses to obtain n+1 outputs for the response class, and by normalizing an aggregation of the obtained n+1 outputs. In other words, the association probability for a covariate vector xi and a response class Cj can be calculated by:

$\hat{P}(y_i < C_j) = \dfrac{1}{n+1} \sum_{b_p = 1}^{n+1} I\big(\hat{y}_i^{(b_p)} < C_j\big)$ for $j = 1, \ldots, k$.  (5)
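Equation (5) can be read as the fraction of the n+1 predicted responses that fall below the cut point Cj, as in this sketch (it assumes each trained model exposes a hypothetical `predict(x_i)` method returning a scalar predicted response).

```python
import numpy as np

def association_probability(models, x_i, cut):
    """Equation (5): share of the n+1 predicted responses that fall below C_j."""
    preds = np.array([m.predict(x_i) for m in models])  # first NNM + n second NNMs
    return float(np.mean(preds < cut))
```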

The one or more computing devices estimate a treatment effect of a medical treatment by comparing the association probabilities for the covariate vectors of patients who have received the medical treatment to the association probabilities for the covariate vectors of patients who have not received the medical treatment. More specifically, the computing devices determine (or receive from the database) a first subset of covariate vectors for patients who received the medical treatment, and a second subset of covariate vectors for patients who did not receive the medical treatment (416). The computing devices then compare, for one or more key response classes, (i) a first normalized aggregation of association probabilities for the first subset, with (ii) a second normalized aggregation of association probabilities of the second subset (418). In other words, the computing devices estimate the treatment effect by:

$\hat{\pi}_{C_j} = \dfrac{1}{\sum_i I(t_i = 1)} \sum_{t_i = 1} \hat{P}(y_i < C_j) - \dfrac{1}{\sum_i I(t_i = 0)} \sum_{t_i = 0} \hat{P}(y_i < C_j),$  (6)

wherein ti=1 denotes treatment cases and ti=0 denotes no-treatment cases. The one or more key response classes can include all of the response classes, or particular ones.

The one or more computing devices provide the estimated treatment effect for presentation. For example, the computing devices can provide the estimated treatment effect for display on a graphical user interface (420), for example, a graphical user interface of the computing device 102 in FIG. 1.

In addition to the above-described procedures, the one or more computing devices can estimate the uncertainty of the estimated treatment effect by using the second level of bootstrapping. In the second level, the uncertainty is estimated by bootstrapping the first dataset m times to obtain m third datasets, calculating a treatment effect for each third dataset in the m third datasets to obtain m treatment effects, and calculating the uncertainty based on a distribution of the m treatment effects. Example uncertainties include, but are not limited to, a confidence interval and a standard deviation of the m treatment effects.
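The second level of bootstrapping could be sketched as follows, assuming a hypothetical `estimate_treatment_effect(x, y, t)` helper that runs the entire first-level procedure on one dataset; the standard deviation and a percentile confidence interval of the m re-estimated effects quantify the uncertainty.

```python
import numpy as np

def treatment_effect_uncertainty(x, y, t, m, estimate_treatment_effect, rng=None):
    """Second-level bootstrap: uncertainty from m re-estimated treatment effects."""
    rng = rng or np.random.default_rng(0)
    q = len(y)
    effects = []
    for _ in range(m):
        idx = rng.integers(0, q, size=q)      # resample patients with replacement
        effects.append(estimate_treatment_effect(x[idx], y[idx], t[idx]))
    effects = np.array(effects)
    ci = np.percentile(effects, [2.5, 97.5])  # 95% bootstrap confidence interval
    return effects.std(ddof=1), ci
```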

FIG. 5 depicts a schematic diagram of an example computing system 500. The system 500 may be used to perform the operations described with regard to one or more implementations according to any of the first or the second method of the present disclosure. For example, the system 500 may be included in any or all of the server components, or other computing device(s), discussed herein. The system 500 may include one or more processors 510, one or more memories 520, one or more storage devices 530, and one or more input/output (I/O) devices 540. The components 510, 520, 530, 540 may be interconnected using a system bus 550.

The processor 510 may be configured to execute instructions within the system 500. The processor 510 may include a single-threaded processor or a multi-threaded processor. The processor 510 may be configured to execute or otherwise process instructions stored in one or both of the memory 520 or the storage device 530. Execution of the instruction(s) may cause graphical information to be displayed or otherwise presented via a user interface on the I/O device 540.

The memory 520 may store information within the system 500. In some implementations, the memory 520 is a computer-readable medium. In some implementations, the memory 520 may include one or more volatile memory units. In some implementations, the memory 520 may include one or more non-volatile memory units.

The storage device 530 may be configured to provide mass storage for the system 500. In some implementations, the storage device 530 is a computer-readable medium. The storage device 530 may include a floppy disk device, a hard disk device, an optical disk device, a tape device, or other type of storage device. The I/O device 540 may provide I/O operations for the system 500. In some implementations, the I/O device 540 may include a keyboard, a pointing device, or other devices for data input. In some implementations, the I/O device 540 may include output devices such as a display unit for displaying graphical user interfaces or other types of user interfaces.

The features described may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus may be implemented in a computer program product tangibly embodied in an information carrier (e.g., in a machine-readable storage device) for execution by a programmable processor; and method steps may be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features may be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that may be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.

Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer may also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, application-specific integrated circuits (ASICs).

To provide for interaction with a user, the features may be implemented on a computer having a display device such as a cathode ray tube (CRT) or liquid crystal display (LCD) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user may provide input to the computer.

The features may be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system may be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a local area network (LAN), a wide area network (WAN), and the computers and networks forming the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.

A number of implementations of the present disclosure have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, other implementations are within the scope of the following claims.

Claims

1. A non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising:

receiving, from a database, a dataset comprising: a set of covariate vectors, each covariate vector including clinical covariates of a respective patient, and a set of response indicators, each response indicator being associated with a respective covariate vector, wherein response indicators of the set of response indicators vary over a range, wherein each range includes a plurality of response classes for the respective clinical covariate, each response class being a portion of the respective range; and
estimating a treatment effect of a medical treatment by: determining a first subset of covariate vectors for patients who received the medical treatment, and a second subset of covariate vectors for patients who did not receive the medical treatment, obtaining a first normalized probability by: applying a neural network model on the first subset of covariate vectors to obtain a first set of probabilities, wherein the neural network model, when applied on a covariate vector, provides probabilities of the covariate vector falling in each response class in the plurality of response classes associated with the covariate vector, and calculating a normalized aggregation of the first set of probabilities to obtain the first normalized probability, obtaining a second normalized probability by: applying the neural network model on the second subset of covariate vectors to obtain a second set of probabilities, and calculating a normalized aggregation of the second set of probabilities to obtain the second normalized probability, and comparing, for one or more response classes, (i) the first normalized probability to (ii) the second normalized probability; and
providing the estimated treatment effect for display on a graphical user interface of a computing device.

2. The computer-readable medium of claim 1, wherein the neural network model utilizes a non-linear activation function and a loss function.

3. The computer-readable medium of claim 1, wherein in estimating the treatment effect, the one or more response classes comprises all of the plurality of response classes.

4. The computer-readable medium of claim 1, wherein at least two response classes in the plurality of response classes are disjoint.

5. The computer-readable medium of claim 1, wherein the neural network model utilizes a softmax activation function and a cross-entropy loss function.

6. The computer-readable medium of claim 1, wherein the neural network is a feedforward neural network.

7. A non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising:

receiving, from a database, a first dataset comprising: a set of covariate vectors, each covariate vector including clinical covariates of a respective patient, and a set of response indicators, each response indicator being associated with a respective covariate vector, wherein response indicators of the set of response indicators vary over a range, wherein each range includes a plurality of response classes for the respective clinical covariate, each response class being a portion of the respective range;
for each covariate vector: obtaining n+1 predicted responses for the covariate vector, each predicted response being obtained by applying a respective one of n+1 neural network models (NNMs) to the covariate vector, and for each response class, estimating an association probability indicating that the covariate vector is associated with the response class based on the n+1 predicted responses;
estimating a treatment effect of a medical treatment by: determining a first subset of covariate vectors for patients who received the medical treatment, and a second subset of covariate vectors for patients who did not receive the medical treatment, and comparing, for one or more response classes, (i) a first normalized aggregation of association probabilities for the first subset, to (ii) a second normalized aggregation of association probabilities of the second subset; and
providing the estimated treatment effect for display on a graphical user interface of a computing device.

8. The computer-readable medium of claim 7, wherein the one or more response classes comprises all of the plurality of response classes.

9. The computer-readable medium of claim 7, wherein the operations further comprise estimating an uncertainty on the treatment effect by:

bootstrapping the first dataset for m times to obtain m third datasets;
calculating treatment effect for each third dataset in the m third datasets to obtain m treatment effects; and
calculating the uncertainty based on a distribution of the m treatment effects.

10. The computer-readable medium of claim 9, wherein the uncertainty on the treatment effect includes at least one of a confidence interval and a standard deviation of the m treatment effects.

11. The computer-readable medium of claim 7, wherein the NNM utilizes a non-linear activation function and a least square error loss function.

12. The computer-readable medium of claim 11, wherein the non-linear activation function is a rectified linear unit (ReLU) function.

13. A computer-implemented method executed by one or more processors, the method comprising:

receiving, from a database, a dataset comprising: a set of covariate vectors, each covariate vector including clinical covariates of a respective patient, and a set of response indicators, each response indicator being associated with a respective covariate vector, wherein response indicators of the set of response indicators vary over a range, wherein each range includes a plurality of response classes for the respective clinical covariate, each response class being a portion of the respective range; and
estimating a treatment effect of a medical treatment by: determining a first subset of covariate vectors for patients who received the medical treatment, and a second subset of covariate vectors for patients who did not receive the medical treatment, obtaining a first normalized probability by: applying a neural network model on the first subset of covariate vectors to obtain a first set of probabilities, wherein the neural network model, when applied on a covariate vector, provides probabilities of the covariate vector falling in each response class in the plurality of response classes associated with the covariate vector, and calculating a normalized aggregation of the first set of probabilities to obtain the first normalized probability, obtaining a second normalized probability by: applying the neural network model on the second subset of covariate vectors to obtain a second set of probabilities, and calculating a normalized aggregation of the second set of probabilities to obtain the second normalized probability, and comparing, for one or more response classes, (i) the first normalized probability to (ii) the second normalized probability; and
providing the estimated treatment effect for display on a graphical user interface of a computing device.

14. The method of claim 13, wherein in estimating the treatment effect, the one or more response classes comprises all of the plurality of response classes.

15. The method of claim 13, wherein at least two response classes in the plurality of response classes are disjoint.

16. The method of claim 13, wherein the neural network model utilizes a softmax activation function and a cross-entropy loss function.

17. The method of claim 13, wherein the neural network is a feedforward neural network.

18. The method of claim 13, wherein the neural network model utilizes a non-linear activation function and a loss function.

Patent History
Publication number: 20230260606
Type: Application
Filed: Apr 27, 2023
Publication Date: Aug 17, 2023
Inventors: Qi Tang (Hopewell, NJ), Wodan Ling (New York, NY), Youran Qi (Madison, WI)
Application Number: 18/308,526
Classifications
International Classification: G16H 10/20 (20060101); G16H 10/60 (20060101); G06N 20/10 (20060101); A61B 5/00 (20060101); G06N 3/08 (20060101); G06N 5/046 (20060101); G16H 50/70 (20060101); G06F 18/2431 (20060101); G06N 3/048 (20060101);