Patents by Inventor Aaditya Prakash

Aaditya Prakash has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143921
    Abstract: Techniques are described for training and/or utilizing sub-agent machine learning models to generate candidate dialog responses. In various implementations, a user-facing dialog agent (202, 302), or another component on its behalf, selects the candidate response that is closest to user-defined global priority objectives (318). Global priority objectives can include values (306) for a variety of dialog features such as emotion, confusion, objective-relatedness, personality, verbosity, etc. In various implementations, each machine learning model includes an encoder portion and a decoder portion. Each encoder portion and decoder portion can be a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer.
    Type: Application
    Filed: January 4, 2024
    Publication date: May 2, 2024
    Inventors: Vivek Varma DATLA, Sheikh Sadid AL HASAN, Aaditya PRAKASH, Oladimeji Feyisetan FARRI, Tilak Raj ARORA, Junyi LIU, Ashequl QADIR
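A minimal sketch of the selection step described in the abstract above: each sub-agent proposes a candidate response, and the user-facing agent keeps the one whose dialog-feature scores lie closest to the user-defined global priority objectives. The feature scorers and the distance metric below are illustrative assumptions, not the patented implementation.
```python
# Sketch: pick the candidate response nearest to user-defined global
# priority objectives. The feature scorers are placeholders.
import math

def score_features(response: str) -> dict:
    """Hypothetical per-response dialog-feature scores in [0, 1]."""
    return {
        "verbosity": min(len(response.split()) / 50.0, 1.0),
        "emotion": 0.5,            # stand-in for a learned emotion scorer
        "objective_related": 0.5,  # stand-in for a relatedness scorer
    }

def select_response(candidates: list[str], objectives: dict) -> str:
    """Return the candidate whose feature scores are nearest (L2 distance)
    to the global priority objectives chosen by the user."""
    def distance(resp: str) -> float:
        feats = score_features(resp)
        return math.sqrt(sum((feats[k] - objectives[k]) ** 2 for k in objectives))
    return min(candidates, key=distance)

if __name__ == "__main__":
    candidates = [
        "Sure.",
        "Of course, I would be happy to walk you through each step in detail.",
    ]
    objectives = {"verbosity": 0.8, "emotion": 0.6, "objective_related": 0.9}
    print(select_response(candidates, objectives))
```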
  • Patent number: 11868720
    Abstract: Techniques are described for training and/or utilizing sub-agent machine learning models to generate candidate dialog responses. In various implementations, a user-facing dialog agent (202, 302), or another component on its behalf, selects the candidate response that is closest to user-defined global priority objectives (318). Global priority objectives can include values (306) for a variety of dialog features such as emotion, confusion, objective-relatedness, personality, verbosity, etc. In various implementations, each machine learning model includes an encoder portion and a decoder portion. Each encoder portion and decoder portion can be a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 9, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Vivek Varma Datla, Sheikh Sadid Al Hasan, Aaditya Prakash, Oladimeji Feyisetan Farri, Tilak Raj Arora, Junyi Liu, Ashequl Qadir
  • Publication number: 20230237330
    Abstract: Techniques are described herein for training and applying memory neural networks, such as “condensed” memory neural networks (“C-MemNN”) and/or “average” memory neural networks (“A-MemNN”). In various embodiments, the memory neural networks may be iteratively trained using training data in the form of free form clinical notes and clinical reference documents. In various embodiments, during each iteration of the training, a so-called “condensed” memory state may be generated and used as part of the next iteration. Once trained, a free form clinical note associated with a patient may be applied as input across the memory neural network to predict one or more diagnoses or outcomes of the patient.
    Type: Application
    Filed: April 4, 2023
    Publication date: July 27, 2023
    Inventors: Aaditya PRAKASH, Sheikh Sadid AL HASAN, Oladimeji Feyisetan FARRI, Kathy Mi Young LEE, Vivek Varma DATLA, Ashequl QADIR, Junyi LIU
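A minimal sketch of a memory-network hop that carries a condensed memory state between iterations, in the spirit of the C-MemNN summarized above: the encoded clinical note attends over encoded reference documents, and the attention-weighted read is folded into a condensed state passed to the next hop. The dimensions, the condensation rule (a learned linear merge), and the use of PyTorch are illustrative assumptions.
```python
# Sketch of one "condensed memory" hop; not the patented formulation.
import torch
import torch.nn as nn

class CondensedMemoryHop(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Merges the previous condensed state with the new memory read.
        self.condense = nn.Linear(2 * dim, dim)

    def forward(self, query, memory, condensed):
        # query: (batch, dim); memory: (batch, slots, dim); condensed: (batch, dim)
        attn = torch.softmax(torch.bmm(memory, query.unsqueeze(-1)).squeeze(-1), dim=-1)
        read = torch.bmm(attn.unsqueeze(1), memory).squeeze(1)   # attention-weighted read
        new_query = query + read                                 # updated controller state
        new_condensed = torch.tanh(self.condense(torch.cat([condensed, read], dim=-1)))
        return new_query, new_condensed

if __name__ == "__main__":
    dim, slots = 32, 10
    hop = CondensedMemoryHop(dim)
    q = torch.randn(2, dim)            # encoded free-form clinical note
    mem = torch.randn(2, slots, dim)   # encoded clinical reference documents
    cond = torch.zeros(2, dim)
    for _ in range(3):                 # three memory hops
        q, cond = hop(q, mem, cond)
    print(q.shape, cond.shape)
```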
  • Patent number: 11621075
    Abstract: The described embodiments relate to systems, methods, and apparatus for providing a multimodal deep memory network (200) capable of generating patient diagnoses (222). The multimodal deep memory network can employ different neural networks, such as a recurrent neural network and a convolution neural network, for creating embeddings (204, 214, 216) from medical images (212) and electronic health records (206). Connections between the input embeddings (204) and diagnoses embeddings (222) can be based on an amount of attention that was given to the images and electronic health records when creating a particular diagnosis. For instance, the amount of attention can be characterized by data (110) that is generated based on sensors that monitor eye movements of clinicians observing the medical images and electronic health records.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: April 4, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Sheikh Sadid Al Hasan, Siyuan Zhao, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
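A minimal sketch of the gaze-weighted fusion idea in the abstract above: embeddings of a medical image and of the electronic health record are combined using attention weights derived from clinician eye-movement data before predicting a diagnosis. The fusion rule, dimensions, and class count are illustrative assumptions.
```python
# Sketch: fuse image and EHR embeddings with eye-gaze attention weights.
import torch
import torch.nn as nn

class GazeWeightedFusion(nn.Module):
    def __init__(self, dim: int, num_diagnoses: int):
        super().__init__()
        self.classifier = nn.Linear(dim, num_diagnoses)

    def forward(self, image_emb, ehr_emb, gaze_weights):
        # gaze_weights: (batch, 2) -- relative attention clinicians paid to
        # the image vs. the health record, normalized to sum to 1.
        fused = gaze_weights[:, 0:1] * image_emb + gaze_weights[:, 1:2] * ehr_emb
        return self.classifier(fused)          # logits over candidate diagnoses

if __name__ == "__main__":
    model = GazeWeightedFusion(dim=64, num_diagnoses=5)
    img = torch.randn(4, 64)                   # e.g. CNN features of a medical image
    ehr = torch.randn(4, 64)                   # e.g. RNN features of the EHR text
    gaze = torch.softmax(torch.randn(4, 2), dim=-1)
    print(model(img, ehr, gaze).shape)         # -> torch.Size([4, 5])
```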
  • Patent number: 11620506
    Abstract: Techniques are described herein for training and applying memory neural networks, such as “condensed” memory neural networks (“C-MemNN”) and/or “average” memory neural networks (“A-MemNN”). In various embodiments, the memory neural networks may be iteratively trained using training data in the form of free form clinical notes and clinical reference documents. In various embodiments, during each iteration of the training, a so-called “condensed” memory state may be generated and used as part of the next iteration. Once trained, a free form clinical note associated with a patient may be applied as input across the memory neural network to predict one or more diagnoses or outcomes of the patient.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: April 4, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Aaditya Prakash, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Varma Datla, Ashequl Qadir, Junyi Liu
  • Publication number: 20230015207
    Abstract: A system and method for unsupervised training of a text report identification machine learning model, including: labeling a first set of unlabeled text reports using a seed dictionary to identify concepts in the unlabeled text reports; inputting images associated with the first set of seed-labeled text reports into an auto-encoder to obtain an encoded first set of images; calculating a set of first correlation matrices as a dot product of the first encoded images with their associated text report features; determining a distance between the set of first correlation matrices and a filter bank value associated with the auto-encoder; identifying a first set of validated images as the images in the first set of images that have a distance less than a threshold value; and training the text report machine learning model using the labeled text reports associated with the set of first validated images.
    Type: Application
    Filed: December 18, 2020
    Publication date: January 19, 2023
    Inventors: ASHEQUL QADIR, KATHY MI YOUNG LEE, CLAIRE YUNZHU ZHAO, AADITYA PRAKASH, MINNAN XU
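A minimal sketch of the correlation-and-threshold validation step described above: each encoded image is correlated with its report's features, the result is compared with a reference filter-bank value, and only pairs whose distance falls below a threshold are kept for training the text report model. The correlation form (an outer product), the reference matrix, and the threshold value are illustrative assumptions.
```python
# Sketch of validating image/report pairs against a filter-bank reference.
import numpy as np

def validate_images(encoded_images, report_features, filter_bank, threshold):
    """encoded_images: (n, d_img); report_features: (n, d_txt);
    filter_bank: (d_img, d_txt) reference; returns indices of validated pairs."""
    validated = []
    for i in range(encoded_images.shape[0]):
        corr = np.outer(encoded_images[i], report_features[i])  # per-pair correlation matrix
        dist = np.linalg.norm(corr - filter_bank)                # distance to the reference
        if dist < threshold:
            validated.append(i)
    return validated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = rng.normal(size=(8, 16))          # auto-encoded image features
    texts = rng.normal(size=(8, 16))         # seed-labeled report features
    bank = rng.normal(size=(16, 16))         # stand-in filter-bank value
    print(validate_images(imgs, texts, bank, threshold=23.0))   # arbitrary threshold
```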
  • Patent number: 11544529
    Abstract: Techniques described herein relate to semi-supervised training and application of stacked autoencoders and other classifiers for predictive and other purposes. In various embodiments, a semi-supervised model (108) may be trained for sentence classification, and may combine what is referred to herein as a “residual stacked de-noising autoencoder” (“RSDA”) (220), which may be unsupervised, with a supervised classifier (218) such as a classification neural network (e.g., a multilayer perceptron, or “MLP”). In various embodiments, the RSDA may be a stacked denoising autoencoder that may or may not include one or more residual connections. If present, the residual connections may help the RSDA “remember” forgotten information across multiple layers. In various embodiments, the semi-supervised model may be trained with unlabeled data (for the RSDA) and labeled data (for the classifier) simultaneously.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: January 3, 2023
    Assignee: Koninklijke Philips N.V.
    Inventors: Reza Ghaeini, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Kathy Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
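A minimal sketch of training a residual stacked denoising autoencoder jointly with an MLP classifier, as the abstract above describes: unlabeled sentences drive a reconstruction loss, labeled sentences drive a classification loss, and the two are optimized together. Layer sizes, the corruption noise, and the equal loss weighting are illustrative assumptions.
```python
# Sketch: RSDA-style autoencoder with a residual connection plus an MLP head.
import torch
import torch.nn as nn

class ResidualSDAE(nn.Module):
    def __init__(self, dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.enc1 = nn.Linear(dim, hidden)
        self.enc2 = nn.Linear(hidden, hidden)
        self.dec = nn.Linear(hidden, dim)
        self.classifier = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, num_classes))

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)   # denoising corruption
        h1 = torch.relu(self.enc1(noisy))
        h2 = torch.relu(self.enc2(h1)) + h1     # residual connection across layers
        return self.dec(h2), self.classifier(h2)

if __name__ == "__main__":
    model = ResidualSDAE(dim=100, hidden=64, num_classes=3)
    unlabeled = torch.randn(32, 100)            # sentence features without labels
    labeled = torch.randn(8, 100)
    labels = torch.randint(0, 3, (8,))
    recon, _ = model(unlabeled)
    _, logits = model(labeled)
    loss = (nn.functional.mse_loss(recon, unlabeled)
            + nn.functional.cross_entropy(logits, labels))
    loss.backward()
    print(float(loss))
```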
  • Publication number: 20220108068
    Abstract: Techniques are described for training and/or utilizing sub-agent machine learning models to generate candidate dialog responses. In various implementations, a user-facing dialog agent (202, 302), or another component on its behalf, selects the candidate response that is closest to user-defined global priority objectives (318). Global priority objectives can include values (306) for a variety of dialog features such as emotion, confusion, objective-relatedness, personality, verbosity, etc. In various implementations, each machine learning model includes an encoder portion and a decoder portion. Each encoder portion and decoder portion can be a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer.
    Type: Application
    Filed: January 16, 2020
    Publication date: April 7, 2022
    Inventors: VIVEK VARMA DATLA, SHEIKH SADID AL HASAN, AADITYA PRAKASH, OLADIMEJI FEYISETAN FARRI, TILAK RAJ ARORA, JUNYI LIU, ASHEQUL QADIR
  • Patent number: 11068660
    Abstract: The present disclosure pertains to a paraphrase generation system. The system comprises one or more hardware processors and/or other components. The system is configured to obtain a training corpus. The training corpus comprises language and known paraphrases of the language. The system is configured to generate, based on the training corpus, a word-level attention-based model and a character-level attention-based model. The system is configured to provide one or more candidate paraphrases of a natural language input based on both the word-level and character-level attention-based models. The word-level attention-based model is a word-level bidirectional long short-term memory (LSTM) network and the character-level attention-based model is a character-level bidirectional LSTM network. The word-level and character-level LSTM networks are generated based on words and characters in the training corpus.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: July 20, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Sheikh Sadid Al Hasan, Bo Liu, Oladimeji Feyisetan Farri, Junyi Liu, Aaditya Prakash
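A minimal sketch of the two encoders the abstract above combines: a word-level and a character-level bidirectional LSTM over the same input, whose state sequences an attention-based decoder would then use to propose candidate paraphrases. Vocabulary sizes, dimensions, and the decoding step itself are illustrative assumptions.
```python
# Sketch: word-level and character-level bidirectional LSTM encoders.
import torch
import torch.nn as nn

class DualGranularityEncoder(nn.Module):
    def __init__(self, word_vocab: int, char_vocab: int, dim: int):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, dim)
        self.char_emb = nn.Embedding(char_vocab, dim)
        self.word_lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.char_lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)

    def forward(self, word_ids, char_ids):
        word_states, _ = self.word_lstm(self.word_emb(word_ids))  # (batch, words, 2*dim)
        char_states, _ = self.char_lstm(self.char_emb(char_ids))  # (batch, chars, 2*dim)
        # An attention-based decoder would attend over both state sequences
        # to emit candidate paraphrases.
        return word_states, char_states

if __name__ == "__main__":
    enc = DualGranularityEncoder(word_vocab=1000, char_vocab=64, dim=32)
    words = torch.randint(0, 1000, (2, 7))    # word ids of the input sentence
    chars = torch.randint(0, 64, (2, 30))     # character ids of the same sentence
    w, c = enc(words, chars)
    print(w.shape, c.shape)
```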
  • Patent number: 11042712
    Abstract: Techniques are described herein for training machine learning models to simplify (e.g., paraphrase) complex textual content by ensuring that the machine learning models jointly learn both semantic alignment and notions of simplicity. In various embodiments, an input textual segment having multiple tokens and being associated with a first measure of simplicity may be applied as input across a trained machine learning model to generate an output textual segment. The output textual segment may be semantically aligned with the input textual segment and associated with a second measure of simplicity that is greater than the first measure of simplicity (e.g., a paraphrase thereof). The trained machine learning model may include an encoder portion and a decoder portion, as well as control layer(s) trained to maximize the second measure of simplicity by replacing token(s) of the input textual segment with replacement token(s).
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: June 22, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Aaditya Prakash, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Vivek Varma Datla
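A minimal, non-neural sketch of the token-replacement idea above: tokens are swapped for simpler alternatives and the output is checked against a simplicity measure, so the second measure of simplicity exceeds the first. The replacement lexicon and the simplicity measure (inverse mean word length) are illustrative assumptions, not the patent's learned control layers.
```python
# Sketch: replace tokens with simpler ones and verify simplicity increases.
SIMPLER = {"utilize": "use", "commence": "start", "terminate": "end"}

def simplicity(text: str) -> float:
    """Toy simplicity measure: the shorter the average word, the simpler."""
    words = text.split()
    return 1.0 / (sum(len(w) for w in words) / len(words))

def simplify(text: str) -> str:
    """Swap each token for a simpler replacement token when one is known."""
    return " ".join(SIMPLER.get(w.lower(), w) for w in text.split())

if __name__ == "__main__":
    src = "Patients should utilize the inhaler and commence treatment"
    out = simplify(src)
    assert simplicity(out) > simplicity(src)   # second measure of simplicity is greater
    print(out)
```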
  • Publication number: 20200160199
    Abstract: Methods and systems for interacting with a user. Systems in accordance with various embodiments described herein provide a collection of models that are each trained to perform a specific function. These models may be categorized into static models that are trained on an existing corpus of information and dynamic models that are trained based on real-time interactions with users. Collectively, the models provide appropriate communications for a user.
    Type: Application
    Filed: July 9, 2018
    Publication date: May 21, 2020
    Inventors: SHEIKH SADID AL HASAN, OLADIMEJI FEYISETAN FARRI, AADITYA PRAKASH, VIVEK VARMA DATLA, KATHY MI YOUNG LEE, ASHEQUL QADIR, JUNYI LIU
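A minimal sketch of the static/dynamic split described above: a static model is fixed after offline training on an existing corpus, while a dynamic model keeps updating from real-time interactions with the user. The interfaces and the update signal are illustrative assumptions.
```python
# Sketch: static vs. dynamic models in a dialog model collection.
class StaticModel:
    """Trained once on an existing corpus; not updated at run time."""
    def respond(self, utterance: str) -> str:
        return f"[static reply to: {utterance}]"

class DynamicModel:
    """Refined from live interactions with the user."""
    def __init__(self):
        self.history = []

    def respond(self, utterance: str) -> str:
        return f"[reply informed by {len(self.history)} past turns]"

    def update(self, utterance: str, feedback: str) -> None:
        self.history.append((utterance, feedback))   # real-time adaptation signal

if __name__ == "__main__":
    static, dynamic = StaticModel(), DynamicModel()
    print(static.respond("hello"))
    dynamic.update("hello", "thumbs_up")
    print(dynamic.respond("how are you?"))
```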
  • Publication number: 20200042547
    Abstract: A method of producing an unsupervised constrained text simplification autoencoder including an encoder and a constrained decoder, including: encoding, by the encoder, input text to produce a code; combining a complexity parameter with the code; decoding, by the constrained decoder, the combined code to produce a plurality of outputs, wherein the constrained decoder uses a dropout function to randomize the parameters of the constrained decoder; evaluating a loss function for each of the plurality of outputs, wherein the loss function is based upon the complexity parameter, indicates an achieved text simplification level, and produces an output indicating the difference between the achieved text simplification level and a desired text simplification level; and optimizing the constrained text simplification autoencoder by repeatedly evaluating the loss function for each input text in an input text training data set while varying the parameters of the encoder, the parameters of the constrained decoder, and the complexity parameter.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 6, 2020
    Inventors: AADITYA PRAKASH, SHEIKH SADID AL HASAN, OLADIMEJI FEYISETAN FARRI
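A minimal sketch of the decoding step described above: the encoder's code is combined with a complexity parameter and decoded several times with dropout left active, so each pass yields a different candidate output for the loss function to evaluate. The network shapes, dropout rate, and sample count are illustrative assumptions.
```python
# Sketch: constrained decoding with an appended complexity parameter and
# dropout kept active to produce multiple candidate outputs.
import torch
import torch.nn as nn

class ConstrainedSimplifier(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)
        self.decoder = nn.Sequential(nn.Linear(dim + 1, dim), nn.Dropout(p=0.3),
                                     nn.Linear(dim, dim))

    def forward(self, x, complexity: float, samples: int = 4):
        code = torch.tanh(self.encoder(x))
        c = torch.full((x.size(0), 1), complexity)   # desired complexity level
        combined = torch.cat([code, c], dim=-1)
        # Dropout stays active (train mode), so repeated decoding differs.
        return [self.decoder(combined) for _ in range(samples)]

if __name__ == "__main__":
    model = ConstrainedSimplifier(dim=16).train()    # train mode keeps dropout on
    x = torch.randn(2, 16)                           # encoded input-text features
    outputs = model(x, complexity=0.3)
    print(len(outputs), outputs[0].shape)
```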
  • Publication number: 20190370336
    Abstract: Techniques are described herein for training machine learning models to simplify (e.g., paraphrase) complex textual content by ensuring that the machine learning models jointly learn both semantic alignment and notions of simplicity. In various embodiments, an input textual segment having multiple tokens and being associated with a first measure of simplicity may be applied as input across a trained machine learning model to generate an output textual segment. The output textual segment may be semantically aligned with the input textual segment and associated with a second measure of simplicity that is greater than the first measure of simplicity (e.g., a paraphrase thereof). The trained machine learning model may include an encoder portion and a decoder portion, as well as control layer(s) trained to maximize the second measure of simplicity by replacing token(s) of the input textual segment with replacement token(s).
    Type: Application
    Filed: June 4, 2019
    Publication date: December 5, 2019
    Inventors: Aaditya Prakash, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Vivek Varma Datla
  • Publication number: 20190221312
    Abstract: The described embodiments relate to systems, methods, and apparatus for providing a multimodal deep memory network (200) capable of generating patient diagnoses (222). The multimodal deep memory network can employ different neural networks, such as a recurrent neural network and a convolution neural network, for creating embeddings (204, 214, 216) from medical images (212) and electronic health records (206). Connections between the input embeddings (204) and diagnoses embeddings (222) can be based on an amount of attention that was given to the images and electronic health records when creating a particular diagnosis. For instance, the amount of attention can be characterized by data (110) that is generated based on sensors that monitor eye movements of clinicians observing the medical images and electronic health records.
    Type: Application
    Filed: September 5, 2017
    Publication date: July 18, 2019
    Inventors: Sheikh Sadid Al Hasan, Siyuan Zhao, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
  • Publication number: 20190205733
    Abstract: Techniques described herein relate to semi-supervised training and application of stacked autoencoders and other classifiers for predictive and other purposes. In various embodiments, a semi-supervised model (108) may be trained for sentence classification and may combine what is referred to herein as a “residual stacked de-noising autoencoder” (“RSDA”) (220), which may be unsupervised, with a supervised classifier (218) such as a classification neural network (e.g., a multilayer perceptron, or “MLP”). In various embodiments, the RSDA may be a stacked denoising autoencoder that may or may not include one or more residual connections. If present, the residual connections may help the RSDA “remember” forgotten information across multiple layers. In various embodiments, the semi-supervised model may be trained with unlabeled data (for the RSDA) and labeled data (for the classifier) simultaneously.
    Type: Application
    Filed: September 4, 2017
    Publication date: July 4, 2019
    Inventors: Reza Ghaeini, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Kathy Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
  • Publication number: 20190087721
    Abstract: Techniques are described herein for training and applying memory neural networks, such as “condensed” memory neural networks (“C-MemNN”) and/or “average” memory neural networks (“A-MemNN”). In various embodiments, the memory neural networks may be iteratively trained using training data in the form of free form clinical notes and clinical reference documents. In various embodiments, during each iteration of the training, a so-called “condensed” memory state may be generated and used as part of the next iteration. Once trained, a free form clinical note associated with a patient may be applied as input across the memory neural network to predict one or more diagnoses or outcomes of the patient.
    Type: Application
    Filed: September 18, 2017
    Publication date: March 21, 2019
    Inventors: Aaditya Prakash, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Varma Datla, Ashequl Qadir, Junyi Liu
  • Publication number: 20190034416
    Abstract: The present disclosure pertains to a paraphrase generation system. The system comprises one or more hardware processors and/or other components. The system is configured to obtain a training corpus. The training corpus comprises language and known paraphrases of the language. The system is configured to generate, based on the training corpus, a word-level attention-based model and a character-level attention-based model. The system is configured to provide one or more candidate paraphrases of a natural language input based on both the word-level and character-level attention-based models. The word-level attention-based model is a word-level bidirectional long short-term memory (LSTM) network and the character-level attention-based model is a character-level bidirectional LSTM network. The word-level and character-level LSTM networks are generated based on words and characters in the training corpus.
    Type: Application
    Filed: January 23, 2017
    Publication date: January 31, 2019
    Inventors: Sheikh Sadid Al Hasan, Bo Liu, Oladimeji Feyisetan Farri, Junyi Liu, Aaditya Prakash