Patents by Inventor Vivek Datla

Vivek Datla has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Illustrative, non-authoritative code sketches of the two architectures described in the abstracts follow the listing.

  • Patent number: 11621075
    Abstract: The described embodiments relate to systems, methods, and apparatus for providing a multimodal deep memory network (200) capable of generating patient diagnoses (222). The multimodal deep memory network can employ different neural networks, such as a recurrent neural network and a convolutional neural network, for creating embeddings (204, 214, 216) from medical images (212) and electronic health records (206). Connections between the input embeddings (204) and diagnosis embeddings (222) can be based on the amount of attention given to the images and electronic health records when a particular diagnosis was created. For instance, the amount of attention can be characterized by data (110) generated from sensors that monitor the eye movements of clinicians observing the medical images and electronic health records.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: April 4, 2023
    Assignee: Koninklijke Philips N.V.
    Inventors: Sheikh Sadid Al Hasan, Siyuan Zhao, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
  • Patent number: 11544529
    Abstract: Techniques described herein relate to semi-supervised training and application of stacked autoencoders and other classifiers for predictive and other purposes. In various embodiments, a semi-supervised model (108) may be trained for sentence classification and may combine what is referred to herein as a “residual stacked denoising autoencoder” (“RSDA”) (220), which may be unsupervised, with a supervised classifier (218) such as a classification neural network (e.g., a multilayer perceptron, or “MLP”). In various embodiments, the RSDA may be a stacked denoising autoencoder that may or may not include one or more residual connections. If present, the residual connections may help the RSDA “remember” information that would otherwise be forgotten across multiple layers. In various embodiments, the semi-supervised model may be trained with unlabeled data (for the RSDA) and labeled data (for the classifier) simultaneously.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: January 3, 2023
    Assignee: Koninklijke Philips N.V.
    Inventors: Reza Ghaeini, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Kathy Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
  • Publication number: 20190221312
    Abstract: The described embodiments relate to systems, methods, and apparatus for providing a multimodal deep memory network (200) capable of generating patient diagnoses (222). The multimodal deep memory network can employ different neural networks, such as a recurrent neural network and a convolutional neural network, for creating embeddings (204, 214, 216) from medical images (212) and electronic health records (206). Connections between the input embeddings (204) and diagnosis embeddings (222) can be based on the amount of attention given to the images and electronic health records when a particular diagnosis was created. For instance, the amount of attention can be characterized by data (110) generated from sensors that monitor the eye movements of clinicians observing the medical images and electronic health records.
    Type: Application
    Filed: September 5, 2017
    Publication date: July 18, 2019
    Inventors: Sheikh Sadid Al Hasan, Siyuan Zhao, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
  • Publication number: 20190205733
    Abstract: Techniques described herein relate to semi-supervised training and application of stacked autoencoders and other classifiers for predictive and other purposes. In various embodiments, a semi-supervised model (108) may be trained for sentence classification and may combine what is referred to herein as a “residual stacked denoising autoencoder” (“RSDA”) (220), which may be unsupervised, with a supervised classifier (218) such as a classification neural network (e.g., a multilayer perceptron, or “MLP”). In various embodiments, the RSDA may be a stacked denoising autoencoder that may or may not include one or more residual connections. If present, the residual connections may help the RSDA “remember” information that would otherwise be forgotten across multiple layers. In various embodiments, the semi-supervised model may be trained with unlabeled data (for the RSDA) and labeled data (for the classifier) simultaneously.
    Type: Application
    Filed: September 4, 2017
    Publication date: July 4, 2019
    Inventors: Reza Ghaeini, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Kathy Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
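
To make the attention mechanism described in the abstracts of patent 11621075 and publication 20190221312 more concrete, below is a minimal PyTorch-style sketch, not the claimed architecture: a convolutional encoder embeds a medical image, a recurrent encoder embeds EHR tokens, and the two modality embeddings are fused with weights that combine a learned attention score with an observed clinician-gaze distribution. The class name MultimodalDiagnosisNet, the layer sizes, and the specific way the gaze weights enter the softmax are illustrative assumptions.

```python
# Minimal sketch of a multimodal network that fuses image and EHR embeddings
# with an attention distribution derived from clinician gaze data.
# All module names, sizes, and the gaze-weighting scheme are illustrative
# assumptions, not the patented architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalDiagnosisNet(nn.Module):
    def __init__(self, ehr_vocab=5000, embed_dim=128, n_diagnoses=50):
        super().__init__()
        # Convolutional encoder for medical images (e.g., 1-channel scans).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, embed_dim),
        )
        # Recurrent encoder for tokenized electronic health records.
        self.ehr_embedding = nn.Embedding(ehr_vocab, embed_dim)
        self.ehr_encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        # Learned attention score per modality, plus a diagnosis classifier.
        self.attn = nn.Linear(embed_dim, 1)
        self.classifier = nn.Linear(embed_dim, n_diagnoses)

    def forward(self, image, ehr_tokens, gaze_weights):
        """gaze_weights: (batch, 2) relative attention the clinician gave to
        the image vs. the EHR, e.g., normalized fixation durations."""
        img_emb = self.image_encoder(image)                   # (B, D)
        _, ehr_hidden = self.ehr_encoder(self.ehr_embedding(ehr_tokens))
        ehr_emb = ehr_hidden.squeeze(0)                       # (B, D)
        modalities = torch.stack([img_emb, ehr_emb], dim=1)   # (B, 2, D)
        # Combine learned attention with the observed gaze distribution.
        learned = self.attn(modalities).squeeze(-1)           # (B, 2)
        weights = F.softmax(learned + torch.log(gaze_weights + 1e-8), dim=1)
        fused = (weights.unsqueeze(-1) * modalities).sum(dim=1)
        return self.classifier(fused)                         # diagnosis logits


# Example forward pass with random data.
net = MultimodalDiagnosisNet()
logits = net(torch.randn(2, 1, 64, 64),
             torch.randint(0, 5000, (2, 30)),
             torch.tensor([[0.7, 0.3], [0.4, 0.6]]))
print(logits.shape)  # torch.Size([2, 50])
```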
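
Likewise, the sketch below illustrates the joint semi-supervised training described in the abstracts of patent 11544529 and publication 20190205733, under stated assumptions rather than as the patented method: a stacked denoising autoencoder with a residual connection is trained on unlabeled sentence embeddings with a reconstruction loss, while a small MLP classifier is trained on labeled embeddings through the shared encoder, and both losses are optimized simultaneously. The module names, the Gaussian corruption, and the equal loss weighting are assumptions for illustration.

```python
# Minimal sketch of joint semi-supervised training: a stacked denoising
# autoencoder with a residual connection (reconstruction loss on unlabeled
# data) shares its encoder with an MLP classifier (cross-entropy loss on
# labeled data). Layer sizes, noise level, and loss weighting are
# illustrative assumptions, not the patented method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDenoisingAutoencoder(nn.Module):
    def __init__(self, in_dim=300, hidden=128):
        super().__init__()
        self.enc1 = nn.Linear(in_dim, hidden)
        self.enc2 = nn.Linear(hidden, hidden)
        self.dec = nn.Linear(hidden, in_dim)

    def encode(self, x):
        h1 = torch.relu(self.enc1(x))
        # Residual connection helps deeper layers retain earlier information.
        return torch.relu(self.enc2(h1)) + h1

    def forward(self, x, noise_std=0.1):
        corrupted = x + noise_std * torch.randn_like(x)  # denoising corruption
        return self.dec(self.encode(corrupted))


rsda = ResidualDenoisingAutoencoder()
classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(
    list(rsda.parameters()) + list(classifier.parameters()), lr=1e-3)

# Toy batches standing in for sentence embeddings.
unlabeled = torch.randn(32, 300)
labeled, labels = torch.randn(8, 300), torch.randint(0, 3, (8,))

for step in range(100):
    optimizer.zero_grad()
    # Unsupervised reconstruction objective on unlabeled data.
    recon_loss = F.mse_loss(rsda(unlabeled), unlabeled)
    # Supervised objective on labeled data, using the shared encoder.
    logits = classifier(rsda.encode(labeled))
    cls_loss = F.cross_entropy(logits, labels)
    # Both objectives are optimized simultaneously.
    (recon_loss + cls_loss).backward()
    optimizer.step()
```

Sharing the encoder between the reconstruction and classification objectives is what lets the unlabeled data shape the representation the classifier uses, which is the point of training both objectives at the same time.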