Patents by Inventor Rupesh Kumar Srivastava

Rupesh Kumar Srivastava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10984320
    Abstract: A computer-based method includes receiving an input signal at a neuron in a computer-based neural network that includes a plurality of neuron layers, applying a first non-linear transform to the input signal at the neuron to produce a plain signal, and calculating a weighted sum of a first component of the input signal and the plain signal at the neuron. In a typical implementation, the first non-linear transform is a function of the first component of the input signal and at least a second component of the input signal.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 20, 2021
    Assignee: Nnaisense SA
    Inventors: Rupesh Kumar Srivastava, Klaus Greff
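    The weighted-sum idea in the abstract above (a gate blends a non-linear transform of the input with the input itself) can be sketched for a single scalar unit as follows. All parameter names are illustrative assumptions, not taken from the patent:

    ```python
    import math

    def highway_unit(x, w_h, b_h, w_t, b_t):
        """Scalar sketch of a gated unit: a non-linear transform h of the
        input is blended with the raw input x, weighted by a gate t that
        is itself a function of the input (parameter names are hypothetical)."""
        h = math.tanh(w_h * x + b_h)                       # non-linear transform of the input
        t = 1.0 / (1.0 + math.exp(-(w_t * x + b_t)))       # gate in (0, 1), also input-dependent
        return t * h + (1.0 - t) * x                       # weighted sum of transform and input

    # With a strongly negative gate bias the unit mostly passes x through;
    # with a strongly positive one it mostly passes the transformed signal.
    print(highway_unit(0.5, 1.0, 0.0, 1.0, -20.0))  # close to 0.5
    print(highway_unit(0.5, 1.0, 0.0, 1.0, 20.0))   # close to tanh(0.5)
    ```

    Gating of this kind lets very deep stacks of such units keep an unimpeded path for the input signal, which is the usual motivation for the blended sum.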
  • Publication number: 20210089966
    Abstract: A method, referred to herein as upside down reinforcement learning (UDRL), includes: initializing a set of parameters for a computer-based learning model; providing a command input into the computer-based learning model as part of a trial, wherein the command input calls for producing a specified reward within a specified amount of time in an environment external to the computer-based learning model; producing an output with the computer-based learning model based on the command input; and utilizing the output to cause an action in the environment external to the computer-based learning model. Typically, during training, the command inputs (e.g., “get so much desired reward within so much time,” or more complex command inputs) are retrospectively adjusted to match what was really observed.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Inventors: Juergen Schmidhuber, Rupesh Kumar Srivastava
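    The two ingredients in the UDRL abstract above can be sketched in a few lines: the command ("get so much reward within so much time") is an input to the model, and observed trajectories are retrospectively relabeled so that every step becomes a supervised training example. Function and variable names here are assumptions for illustration, not the patent's API:

    ```python
    def act(model, observation, desired_reward, horizon):
        """UDRL-style inference sketch: the command is an input to the
        policy, not a training signal. `model` is any callable mapping
        (observation, desired_reward, horizon) to an action."""
        return model(observation, desired_reward, horizon)

    def relabel(trajectory):
        """Retrospectively rewrite each step's command to match what was
        actually observed from that step onward, turning the trajectory
        into (command_input -> action) training pairs.
        `trajectory` is a list of (observation, action, reward) tuples."""
        examples = []
        for t, (obs, action, _) in enumerate(trajectory):
            achieved = sum(r for (_, _, r) in trajectory[t:])  # reward actually obtained
            steps_left = len(trajectory) - t                   # time actually taken
            examples.append(((obs, achieved, steps_left), action))
        return examples

    traj = [("s0", "a0", 1.0), ("s1", "a1", 2.0)]
    print(relabel(traj))
    # [(('s0', 3.0, 2), 'a0'), (('s1', 2.0, 1), 'a1')]
    ```

    The relabeled pairs can then be fit with ordinary supervised learning, which is the sense in which reinforcement learning is turned "upside down".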
  • Patent number: 9836671
    Abstract: Disclosed herein are technologies directed to discovering semantic similarities between images and text, which can include performing image search using a textual query, performing text search using an image as a query, and/or generating captions for images using a caption generator. A semantic similarity framework can include a caption generator and can be based on a deep multimodal similarity model. The deep multimodal similarity model can receive sentences and determine the relevancy of the sentences based on similarity of text vectors generated for one or more sentences to an image vector generated for an image. The text vectors and the image vector can be mapped in a semantic space, and their relevance can be determined based at least in part on the mapping. The sentence associated with the text vector determined to be the most relevant can be output as a caption for the image.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: December 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jianfeng Gao, Xiaodong He, Saurabh Gupta, Geoffrey G. Zweig, Forrest Iandola, Li Deng, Hao Fang, Margaret A. Mitchell, John C. Platt, Rupesh Kumar Srivastava
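    The relevance computation described in the abstract above, mapping text and image vectors into a shared semantic space and ranking sentences by similarity to the image, can be sketched with cosine similarity over toy vectors. The embeddings and names below are illustrative stand-ins, not the model's learned representations:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two vectors in a shared semantic space."""
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    def best_caption(image_vec, candidates):
        """Return the candidate sentence whose text vector is most similar
        to the image vector. `candidates` is a list of (sentence, vector)
        pairs; in the described framework these vectors would come from
        learned text and image encoders."""
        return max(candidates, key=lambda c: cosine(image_vec, c[1]))[0]

    image_vec = [1.0, 0.0]
    candidates = [("a dog on the grass", [0.9, 0.1]),
                  ("a cat on a sofa", [0.1, 0.9])]
    print(best_caption(image_vec, candidates))  # a dog on the grass
    ```

    The same similarity score supports the other directions mentioned in the abstract: ranking images against a text query, or ranking sentences against an image query.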
  • Publication number: 20170316308
    Abstract: A computer-based method includes receiving an input signal at a neuron in a computer-based neural network that includes a plurality of neuron layers, applying a first non-linear transform to the input signal at the neuron to produce a plain signal, and calculating a weighted sum of a first component of the input signal and the plain signal at the neuron. In a typical implementation, the first non-linear transform is a function of the first component of the input signal and at least a second component of the input signal.
    Type: Application
    Filed: May 1, 2017
    Publication date: November 2, 2017
    Inventors: Rupesh Kumar Srivastava, Klaus Greff
  • Publication number: 20170061250
    Abstract: Disclosed herein are technologies directed to discovering semantic similarities between images and text, which can include performing image search using a textual query, performing text search using an image as a query, and/or generating captions for images using a caption generator. A semantic similarity framework can include a caption generator and can be based on a deep multimodal similarity model. The deep multimodal similarity model can receive sentences and determine the relevancy of the sentences based on similarity of text vectors generated for one or more sentences to an image vector generated for an image. The text vectors and the image vector can be mapped in a semantic space, and their relevance can be determined based at least in part on the mapping. The sentence associated with the text vector determined to be the most relevant can be output as a caption for the image.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 2, 2017
    Inventors: Jianfeng Gao, Xiaodong He, Saurabh Gupta, Geoffrey G. Zweig, Forrest Iandola, Li Deng, Hao Fang, Margaret A. Mitchell, John C. Platt, Rupesh Kumar Srivastava