Patents by Inventor Vivek Kwatra

Vivek Kwatra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230343010
    Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
    Type: Application
    Filed: January 29, 2021
    Publication date: October 26, 2023
    Inventors: Vivek Kwatra, Christian Frueh, Avisek Lahiri, John Lewis
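The auto-regressive conditioning described in this abstract can be sketched with a toy linear model: each frame's face-shape vector is regressed from the current audio feature and the model's previous visual state. All names, dimensions, and the linear form here are hypothetical illustrations, not the patented method.

```python
import numpy as np

def autoregressive_shapes(audio_features, initial_state, w_audio, w_prev):
    # Each frame's face-shape vector is regressed from the current audio
    # feature and the previously predicted visual state, so temporal
    # dynamics stay smooth across frames.
    states = [np.asarray(initial_state, dtype=float)]
    for a in audio_features:
        states.append(w_audio @ np.asarray(a, dtype=float) + w_prev @ states[-1])
    return states[1:]

# toy dimensions: 2-D audio features, 3-D "shape" state
rng = np.random.default_rng(0)
audio = [rng.standard_normal(2) for _ in range(4)]
w_audio = rng.standard_normal((3, 2))   # audio -> shape map (made up)
w_prev = 0.5 * np.eye(3)                # damped recurrence on prior state
shapes = autoregressive_shapes(audio, np.zeros(3), w_audio, w_prev)
```

The recurrence term is what distinguishes this from frame-independent regression: dropping `w_prev` would predict each frame from audio alone.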
  • Patent number: 11556743
    Abstract: A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models are calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: January 17, 2023
    Assignee: Google LLC
    Inventors: Vivek Kwatra, Ullas Gargi, Mehmet Emre Sargin, Henry Hao Tang
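The event-vector pipeline in this abstract can be sketched as follows. This is a minimal stand-in: the event vector is a normalized histogram of which event models fired in a video, and a nearest-centroid rule stands in for the trained classifier. All data and the specific vectorization are illustrative assumptions.

```python
import numpy as np

def event_vector(event_ids, n_models):
    # Histogram of which of the n event models fired in the video,
    # normalised so videos of different lengths stay comparable.
    vec = np.zeros(n_models)
    for e in event_ids:
        vec[e] += 1.0
    return vec / max(len(event_ids), 1)

def train_centroids(vectors, labels):
    # Toy stand-in for the trained classifier: one centroid per class.
    return {lab: np.mean([v for v, l in zip(vectors, labels) if l == lab], axis=0)
            for lab in set(labels)}

def classify(vec, centroids):
    # Assign the class whose centroid is nearest to the event vector.
    return min(centroids, key=lambda lab: np.linalg.norm(vec - centroids[lab]))

# four videos described by which of 3 hypothetical event models fired
train = [event_vector(ids, 3) for ids in ([0, 0, 1], [0, 1, 0], [2, 2], [2, 1, 2])]
labels = ["highlight", "highlight", "non-highlight", "non-highlight"]
model = train_centroids(train, labels)
```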
  • Publication number: 20210295025
    Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
    Type: Application
    Filed: June 4, 2021
    Publication date: September 23, 2021
    Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
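The train-then-infer flow in this abstract can be illustrated with a toy nearest-neighbour model in place of the machine-learnt algorithm, and a simple blend for the personalization step. The images, labels, and blending weight are all made-up examples.

```python
import numpy as np

def infer_expression(live_image, train_images, train_labels):
    # Nearest-neighbour stand-in for the machine-learnt model: return the
    # label of the training image closest to the live eye-sensor image.
    dists = [np.linalg.norm(live_image - img) for img in train_images]
    return train_labels[int(np.argmin(dists))]

def personalize(image, personalization_image, alpha=0.5):
    # Combine a captured image with a personalization image of the user.
    return alpha * image + (1.0 - alpha) * personalization_image

# toy 4x4 "eye sensor" images for two expressions
smile = np.ones((4, 4))
frown = np.zeros((4, 4))
label = infer_expression(np.full((4, 4), 0.9), [smile, frown], ["smile", "frown"])
```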
  • Patent number: 11042729
    Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: June 22, 2021
    Assignee: Google LLC
    Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
  • Publication number: 20210166072
    Abstract: A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models are calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
    Type: Application
    Filed: December 14, 2020
    Publication date: June 3, 2021
    Inventors: Vivek Kwatra, Ullas Gargi, Mehmet Emre Sargin, Henry Hao Tang
  • Patent number: 10867212
    Abstract: A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models are calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: December 15, 2020
    Assignee: Google LLC
    Inventors: Vivek Kwatra, Ullas Gargi, Mehmet Emre Sargin, Henry Hao Tang
  • Patent number: 10514818
    Abstract: A computer-implemented method, computer program product, and computing system is provided for interacting with images having similar content. In an embodiment, a method may include identifying a plurality of photographs as including a common characteristic. The method may also include generating a flipbook media item including the plurality of photographs. The method may further include associating one or more interactive control features with the flipbook media item.
    Type: Grant
    Filed: April 6, 2016
    Date of Patent: December 24, 2019
    Assignee: Google LLC
    Inventors: Sergey Ioffe, Vivek Kwatra, Matthias Grundmann
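The three steps of this method (identify photos with a common characteristic, build a flipbook media item, attach interactive controls) can be sketched as plain data transformations. The metadata fields and control names are hypothetical.

```python
from collections import defaultdict

def group_by_characteristic(photos, key):
    # Identify photos sharing a common characteristic (here, a metadata key).
    groups = defaultdict(list)
    for p in photos:
        groups[p[key]].append(p)
    return dict(groups)

def make_flipbook(photos, controls=("play", "pause", "scrub")):
    # A flipbook media item: time-ordered frames plus interactive
    # control features associated with the item.
    return {"frames": sorted(photos, key=lambda p: p["time"]),
            "controls": list(controls)}

photos = [{"subject": "dog", "time": 2}, {"subject": "dog", "time": 1},
          {"subject": "cat", "time": 3}]
flipbook = make_flipbook(group_by_characteristic(photos, "subject")["dog"])
```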
  • Patent number: 10269177
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: April 23, 2019
    Assignee: Google LLC
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
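The gaze-indexed database lookup mentioned in this abstract can be sketched as a nearest-key search: given the user's current eye-gaze direction, select the stored 3-D face model whose index direction is closest. The database contents and gaze encoding below are invented for illustration.

```python
import numpy as np

def select_face_model(gaze, model_db):
    # model_db is indexed by eye-gaze direction; pick the stored 3-D face
    # model whose gaze key is closest to the query direction.
    best = min(model_db, key=lambda k: np.linalg.norm(np.asarray(k) - gaze))
    return model_db[best]

# hypothetical database: gaze direction -> face-model identifier
db = {(0.0, 0.0): "model_center",
      (1.0, 0.0): "model_right",
      (-1.0, 0.0): "model_left"}
chosen = select_face_model(np.array([0.8, 0.1]), db)
```

In the patented system the selected model would then be rendered at the estimated 3-D pose to replace the HMD region in the camera image.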
  • Publication number: 20180350131
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for video segmentation. One of the methods includes receiving a digital video; performing hierarchical graph-based video segmentation on at least one frame of the digital video to generate a boundary representation for the at least one frame; generating a vector representation from the boundary representation for the at least one frame of the digital video, wherein generating the vector representation includes generating a polygon composed of at least three vectors, wherein each vector comprises two vertices connected by a line segment, from a boundary in the boundary representation; linking the vector representation to the at least one frame of the digital video; and storing the vector representation with the at least one frame of the digital video.
    Type: Application
    Filed: December 31, 2014
    Publication date: December 6, 2018
    Inventors: Irfan Essa, Vivek Kwatra, Matthias Grundmann
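The boundary-to-vector step this abstract describes (a polygon of at least three vectors, each vector two vertices connected by a line segment) can be sketched directly; the input boundary here is a made-up triangle.

```python
def boundary_to_polygon(boundary):
    # A closed boundary of N vertices becomes N line-segment vectors,
    # each defined by two vertices, matching the abstract's vector form.
    n = len(boundary)
    return [(boundary[i], boundary[(i + 1) % n]) for i in range(n)]

triangle = boundary_to_polygon([(0, 0), (4, 0), (0, 3)])
```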
  • Publication number: 20180314881
    Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
    Type: Application
    Filed: December 5, 2017
    Publication date: November 1, 2018
    Inventors: Avneesh Sud, Steven Hickson, Vivek Kwatra, Nicholas Dufour
  • Patent number: 10061999
    Abstract: An example method is disclosed that includes identifying a training set of images, wherein each image in the training set has an identified bounding box that comprises an object class and an object location for an object in the image. The method also includes segmenting each image of the training set, wherein segments comprise sets of pixels that share visual characteristics, and wherein each segment is associated with an object class. The method further includes clustering the segments that are associated with the same object class, and generating a data structure based on the clustering, wherein entries in the data structure comprise visual characteristics for prototypical segments of objects having the object class and further comprise one or more potential bounding boxes for the objects, wherein the data structure is usable to predict bounding boxes of additional images that include an object having the object class.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: August 28, 2018
    Assignee: Google LLC
    Inventors: Vivek Kwatra, Jay Yagnik, Alexander Toshkov Toshev
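The data structure this abstract describes (per-class prototype segments plus candidate bounding boxes) can be sketched as below, with mean-pooling standing in for clustering. Segment features, classes, and boxes are invented toy data.

```python
import numpy as np

def build_prototypes(segments):
    # Group segment features per object class (averaging stands in for
    # clustering) and keep the bounding boxes seen with that class as
    # prediction candidates.
    protos = {}
    for seg in segments:
        entry = protos.setdefault(seg["cls"], {"features": [], "boxes": []})
        entry["features"].append(seg["feature"])
        entry["boxes"].append(seg["box"])
    return {c: {"prototype": np.mean(e["features"], axis=0), "boxes": e["boxes"]}
            for c, e in protos.items()}

def predict_boxes(feature, protos):
    # Match a new segment to the nearest class prototype and propose
    # that class's candidate bounding boxes.
    cls = min(protos, key=lambda c: np.linalg.norm(protos[c]["prototype"] - feature))
    return cls, protos[cls]["boxes"]

segs = [{"cls": "car", "feature": np.array([1.0, 0.0]), "box": (0, 0, 10, 5)},
        {"cls": "person", "feature": np.array([0.0, 1.0]), "box": (0, 0, 3, 8)}]
cls, boxes = predict_boxes(np.array([0.9, 0.1]), build_prototypes(segs))
```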
  • Publication number: 20180101984
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180101989
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180101227
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Patent number: 9888180
    Abstract: An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: February 6, 2018
    Assignee: Google LLC
    Inventors: Matthias Grundmann, Vivek Kwatra, Irfan Essa
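One plausible reading of the cascaded motion model in this abstract is a per-frame fallback: prefer the most expressive motion model whose fit is still trustworthy, and fall back to simpler models (or no correction) otherwise. The model names, error values, and threshold below are illustrative assumptions, not the patented criteria.

```python
def choose_correction(fit_errors, threshold=1.0):
    # Walk the cascade from the most expressive model down to the
    # simplest, keeping the richest model whose fit error is acceptable.
    for model in ("homography", "similarity", "translation"):
        if fit_errors.get(model, float("inf")) <= threshold:
            return model
    return "identity"  # nothing fits reliably: leave the frame uncorrected

# hypothetical per-frame fit errors: the homography is unstable here,
# so the cascade falls back to the similarity model
frame_fits = {"homography": 2.5, "similarity": 0.4, "translation": 0.3}
```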
  • Publication number: 20170323178
    Abstract: A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models are calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Inventors: Vivek Kwatra, Ullas Gargi, Mehmet Emre Sargin, Henry Hao Tang
  • Patent number: 9715641
    Abstract: A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models are calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: July 25, 2017
    Assignee: Google Inc.
    Inventors: Vivek Kwatra, Ullas Gargi, Mehmet Emre Sargin, Henry Hao Tang
  • Publication number: 20170195575
    Abstract: An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos.
    Type: Application
    Filed: March 20, 2017
    Publication date: July 6, 2017
    Inventors: Matthias Grundmann, Vivek Kwatra, Irfan Essa
  • Patent number: 9659352
    Abstract: A method, computer program product, and computer system for identifying a first portion of a facial image in a first image, wherein the first portion includes noise. A corresponding portion of the facial image is identified in a second image, wherein the corresponding portion includes less noise than the first portion. One or more filter parameters of the first portion are determined based upon, at least in part, the first portion and the corresponding portion. At least a portion of the noise from the first portion is smoothed based upon, at least in part, the one or more filter parameters. At least a portion of face specific details from the corresponding portion is added to the first portion.
    Type: Grant
    Filed: February 5, 2015
    Date of Patent: May 23, 2017
    Assignee: Google Inc.
    Inventors: Sergey Ioffe, Troy Chinen, Vivek Kwatra, Hui Fang, Yichang Shih
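The smooth-then-reinject idea in this abstract can be sketched on 1-D signals: smooth the noisy portion, then add back the high-frequency, face-specific detail taken from the less-noisy corresponding portion. The box filter stands in for the estimated filter parameters and the toy signals are invented.

```python
import numpy as np

def smooth(signal, k=3):
    # Box filter as a stand-in for the estimated smoothing filter.
    return np.convolve(signal, np.ones(k) / k, mode="same")

def denoise(noisy, clean_ref):
    # Smooth the noisy signal, then add back the high-frequency detail
    # present in the corresponding, less-noisy signal.
    return smooth(noisy) + (clean_ref - smooth(clean_ref))

noisy = np.array([1.0, 3.0, 1.0, 3.0, 1.0])
clean = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
result = denoise(noisy, clean)
```

When the "clean" reference equals the input, the detail term exactly cancels the smoothing, so an already-clean signal passes through unchanged.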
  • Patent number: 9635261
    Abstract: An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos.
    Type: Grant
    Filed: May 9, 2016
    Date of Patent: April 25, 2017
    Assignee: Google Inc.
    Inventors: Matthias Grundmann, Vivek Kwatra, Irfan Essa