Patents by Inventor Suyog Dutt Jain

Suyog Dutt Jain has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). An illustrative code sketch of the two-stream segmentation approach described in these abstracts follows the listing.

  • Patent number: 11823392
    Abstract: A method, system and computer program product for segmenting generic foreground objects in images and videos. For segmenting generic foreground objects in videos, an appearance stream of an image in a video frame is processed using a first deep neural network. Furthermore, a motion stream of an optical flow image in the video frame is processed using a second deep neural network. The appearance and motion streams are then joined to combine complementary appearance and motion information to perform segmentation of generic objects in the video frame. Generic foreground objects are segmented in images by training a convolutional deep neural network to estimate a likelihood that a pixel in an image belongs to a foreground object. After receiving the image, the likelihood that the pixel in the image is part of the foreground object as opposed to background is then determined using the trained convolutional deep neural network.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: November 21, 2023
    Assignee: Board of Regents, The University of Texas System
    Inventors: Kristen Grauman, Suyog Dutt Jain, Bo Xiong
  • Publication number: 20220375102
    Abstract: A method, system and computer program product for segmenting generic foreground objects in images and videos. For segmenting generic foreground objects in videos, an appearance stream of an image in a video frame is processed using a first deep neural network. Furthermore, a motion stream of an optical flow image in the video frame is processed using a second deep neural network. The appearance and motion streams are then joined to combine complementary appearance and motion information to perform segmentation of generic objects in the video frame. Generic foreground objects are segmented in images by training a convolutional deep neural network to estimate a likelihood that a pixel in an image belongs to a foreground object. After receiving the image, the likelihood that the pixel in the image is part of the foreground object as opposed to background is then determined using the trained convolutional deep neural network.
    Type: Application
    Filed: August 2, 2022
    Publication date: November 24, 2022
    Inventors: Kristen Grauman, Suyog Dutt Jain, Bo Xiong
  • Patent number: 11423548
    Abstract: A method, system and computer program product for segmenting generic foreground objects in images and videos. For segmenting generic foreground objects in videos, an appearance stream of an image in a video frame is processed using a first deep neural network. Furthermore, a motion stream of an optical flow image in the video frame is processed using a second deep neural network. The appearance and motion streams are then joined to combine complementary appearance and motion information to perform segmentation of generic objects in the video frame. Generic foreground objects are segmented in images by training a convolutional deep neural network to estimate a likelihood that a pixel in an image belongs to a foreground object. After receiving the image, the likelihood that the pixel in the image is part of the foreground object as opposed to background is then determined using the trained convolutional deep neural network.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: August 23, 2022
    Assignee: Board of Regents, The University of Texas System
    Inventors: Kristen Grauman, Suyog Dutt Jain, Bo Xiong
  • Publication number: 20190355128
    Abstract: A method, system and computer program product for segmenting generic foreground objects in images and videos. For segmenting generic foreground objects in videos, an appearance stream of an image in a video frame is processed using a first deep neural network. Furthermore, a motion stream of an optical flow image in the video frame is processed using a second deep neural network. The appearance and motion streams are then joined to combine complementary appearance and motion information to perform segmentation of generic objects in the video frame. Generic foreground objects are segmented in images by training a convolutional deep neural network to estimate a likelihood that a pixel in an image belongs to a foreground object. After receiving the image, the likelihood that the pixel in the image is part of the foreground object as opposed to background is then determined using the trained convolutional deep neural network.
    Type: Application
    Filed: December 5, 2017
    Publication date: November 21, 2019
    Inventors: Kristen Grauman, Suyog Dutt Jain, Bo Xiong
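
The abstracts above describe a two-stream architecture: one deep network processes the appearance (RGB) stream of a video frame, a second processes the motion (optical flow) stream, and the two are joined so the fused features predict a per-pixel likelihood that each pixel belongs to a generic foreground object. The sketch below is a minimal, hedged illustration of that idea in PyTorch; the class and layer names (e.g. TwoStreamSegmenter, conv_block) are assumptions for illustration only, and the patented networks would use far deeper encoders and decoders than shown here.

```python
# Minimal sketch of a two-stream foreground segmenter, assuming a 3-channel
# RGB frame and a 2-channel optical flow image as inputs. Names and layer
# sizes are illustrative assumptions, not the patented implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Small convolutional block (conv -> ReLU) that preserves spatial size.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TwoStreamSegmenter(nn.Module):
    """Appearance stream + motion stream, fused to estimate a per-pixel
    likelihood that a pixel belongs to a generic foreground object."""

    def __init__(self, features=32):
        super().__init__()
        # First deep network: processes the appearance (RGB) stream.
        self.appearance = nn.Sequential(conv_block(3, features),
                                        conv_block(features, features))
        # Second deep network: processes the motion (optical flow) stream.
        self.motion = nn.Sequential(conv_block(2, features),
                                    conv_block(features, features))
        # Fusion joins the complementary appearance and motion information.
        self.fuse = nn.Sequential(conv_block(2 * features, features),
                                  nn.Conv2d(features, 1, kernel_size=1))

    def forward(self, rgb, flow):
        a = self.appearance(rgb)           # appearance features
        m = self.motion(flow)              # motion features
        joint = torch.cat([a, m], dim=1)   # channel-wise fusion of the streams
        # Sigmoid yields the likelihood that each pixel is part of the
        # foreground object as opposed to the background.
        return torch.sigmoid(self.fuse(joint))


if __name__ == "__main__":
    model = TwoStreamSegmenter()
    rgb = torch.randn(1, 3, 128, 128)      # one video frame
    flow = torch.randn(1, 2, 128, 128)     # its optical flow image
    fg_likelihood = model(rgb, flow)       # shape: (1, 1, 128, 128)
    print(fg_likelihood.shape)
```

Channel-wise concatenation is only one simple way to join the streams; the patents may cover other fusion schemes. The image-only case described in the abstracts corresponds to training a single convolutional network (analogous to the appearance stream with a prediction head) to output the same per-pixel foreground likelihood.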