Patents by Inventor Christian Frueh

Christian Frueh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230343010
    Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
    Type: Application
    Filed: January 29, 2021
    Publication date: October 26, 2023
    Inventors: Vivek Kwatra, Christian Frueh, Avisek Lahiri, John Lewis
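The auto-regressive conditioning described in this abstract (each predicted visual state feeds into the next prediction alongside new audio input) can be illustrated with a toy rollout. Everything here is hypothetical: the regressor, dimensions, and weights are stand-ins, not the patented model.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_frame(audio_feat, prev_state, W_a, W_s):
    """Toy stand-in for the learned regressor: maps the current audio
    feature and the previous visual state to a new visual state
    (e.g., stacked 3D vertex offsets plus texture-atlas coefficients)."""
    return np.tanh(W_a @ audio_feat + W_s @ prev_state)

# Hypothetical dimensions: 16-D audio features, 8-D visual state.
W_a = 0.1 * rng.standard_normal((8, 16))
W_s = 0.1 * rng.standard_normal((8, 8))

audio_track = rng.standard_normal((30, 16))    # 30 frames of audio features
state = np.zeros(8)                            # initial visual state
states = []
for a in audio_track:                          # auto-regressive rollout:
    state = predict_frame(a, state, W_a, W_s)  # condition on previous state
    states.append(state)

states = np.stack(states)
print(states.shape)  # (30, 8)
```

The point of the feedback loop is temporal stability: because frame t sees frame t-1's output, the predicted geometry and texture cannot jump arbitrarily between frames.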
  • Patent number: 10580145
    Abstract: A system and method are disclosed for motion-based feature correspondence. A method may include detecting a first motion of a first feature across two or more first frames of a first video clip captured by a first video camera and a second motion of a second feature across two or more second frames of a second video clip captured by a second video camera. The method may further include determining, based on the first motion in the first video clip and the second motion in the second video clip, that the first feature and the second feature correspond to a common entity, the first motion in the first video clip and the second motion in the second video clip corresponding to one or more common points in time in the first video clip and the second video clip.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: March 3, 2020
    Assignee: Google LLC
    Inventors: Christian Frueh, Caroline Rebecca Pantofaru
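The core idea of the abstract above, deciding that two features seen by different cameras are the same entity because their motions agree at common points in time, can be sketched as a correlation of frame-to-frame motion vectors. This is an illustrative simplification, not the claimed method.

```python
import numpy as np

def motion_correlation(track_a, track_b):
    """Normalized correlation of frame-to-frame motion for two 2D
    feature tracks sampled at the same points in time. A score near
    1.0 suggests the features move together, i.e. a common entity."""
    va = np.diff(track_a, axis=0).ravel()   # motion vectors, camera A
    vb = np.diff(track_b, axis=0).ravel()   # motion vectors, camera B
    va = va - va.mean()
    vb = vb - vb.mean()
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom > 0 else 0.0

# A circling entity seen by two cameras at different scale/offset,
# plus an unrelated feature moving on a different path.
t = np.linspace(0.0, 2.0 * np.pi, 20)
cam_a = np.stack([np.cos(t), np.sin(t)], axis=1)
cam_b = 1.5 * cam_a + 3.0                  # same entity, other viewpoint
other = np.stack([np.cos(3 * t), np.sin(3 * t)], axis=1)

print(round(motion_correlation(cam_a, cam_b), 3))  # 1.0
```

Scale and translation between the two views cancel out of the centered motion vectors, which is why correspondence can be established without calibrating the cameras against each other.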
  • Patent number: 10269177
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: April 23, 2019
    Assignee: Google LLC
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
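One concrete piece of this abstract is the database of face models indexed by eye gaze direction. A minimal sketch of that lookup, with a hypothetical dictionary standing in for the model database, might select the stored model whose index direction is closest in angle to the current gaze:

```python
import numpy as np

def select_face_model(models, gaze):
    """Pick the stored face model whose index gaze direction is closest
    (by angle, i.e. largest cosine) to the user's current gaze."""
    gaze = gaze / np.linalg.norm(gaze)
    best_key, best_dot = None, -np.inf
    for key, direction in models.items():
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        dot = float(d @ gaze)          # cosine of the angle between them
        if dot > best_dot:
            best_key, best_dot = key, dot
    return best_key

# Hypothetical database: gaze direction -> (stand-in for a 3D face model).
models = {
    "look_left":     (-1.0, 0.0, 1.0),
    "look_right":    ( 1.0, 0.0, 1.0),
    "look_straight": ( 0.0, 0.0, 1.0),
}

current_gaze = np.array([0.9, 0.1, 1.0])
print(select_face_model(models, current_gaze))  # look_right
```

Indexing by gaze matters because the eye region is exactly what the HMD occludes: the selected model supplies plausible eyes that match where the user is actually looking.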
  • Publication number: 20190013047
    Abstract: A plurality of videos is analyzed (in real time or after the videos are generated) to identify interesting portions of the videos. The interesting portions are identified based on one or more of the people depicted in the videos, the objects depicted in the videos, the motion of objects and/or people in the videos, and the locations where people depicted in the videos are looking. The interesting portions are combined to generate a content item.
    Type: Application
    Filed: March 31, 2015
    Publication date: January 10, 2019
    Inventors: Arthur Wait, Krishna Bharat, Caroline Rebecca Pantofaru, Christian Frueh, Matthias Grundmann, Jay Yagnik, Ryan Michael Hickman
  • Publication number: 20180101227
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180101989
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180101984
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Patent number: 9870621
    Abstract: A system and method are disclosed for identifying feature correspondences among a plurality of video clips of a dynamic scene. In one implementation, a computer system identifies a first feature in a first video clip of a dynamic scene that is captured by a first video camera, and a second feature in a second video clip of the dynamic scene that is captured by a second video camera. The computer system determines, based on motion in the first video clip and motion in the second video clip, that the first feature and the second feature do not correspond to a common entity.
    Type: Grant
    Filed: March 10, 2015
    Date of Patent: January 16, 2018
    Assignee: Google Inc.
    Inventors: Christian Frueh, Caroline Rebecca Pantofaru
  • Patent number: 9449426
    Abstract: Methods and an apparatus for centering swivel views are disclosed. An example method involves a computing device identifying movement of a pixel location of a 3D object within a sequence of images. Each image of the sequence of images may correspond to a view of the 3D object from a different angular orientation. Based on the identified movement of the pixel location of the 3D object, the computing device may estimate movement parameters of at least one function that describes a location of the 3D object in an individual image. The computing device may also determine for one or more images of the sequence of images a respective modification to the image using the estimated parameters of the at least one function. And the computing device may adjust the pixel location of the 3D object within the one or more images based on the respective modification for the image.
    Type: Grant
    Filed: December 10, 2013
    Date of Patent: September 20, 2016
    Assignee: Google Inc.
    Inventors: Christian Frueh, Ken Conley, Sumit Jain
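The abstract above describes fitting a parametric function to the object's pixel location across the sequence and shifting each image accordingly. A toy version, assuming a simple linear drift model x(i) = a + b·i per axis (the patent's function could be anything), looks like this:

```python
import numpy as np

def centering_offsets(centroids):
    """Fit a motion model x(i) = a + b*i to the object's tracked pixel
    centroid in each frame, then return per-frame offsets that move the
    modeled location back to the first frame's position."""
    idx = np.arange(len(centroids), dtype=float)
    offsets = np.zeros_like(centroids, dtype=float)
    for axis in range(centroids.shape[1]):
        b, a = np.polyfit(idx, centroids[:, axis], 1)  # slope, intercept
        modeled = a + b * idx
        offsets[:, axis] = modeled[0] - modeled
    return offsets

# Hypothetical swivel sequence whose subject drifts 2 px right and
# 1 px down per frame, plus a little tracking noise.
rng = np.random.default_rng(1)
frames = np.arange(10)
centroids = np.stack([100 + 2.0 * frames, 50 + 1.0 * frames], axis=1)
centroids = centroids + rng.normal(0.0, 0.1, centroids.shape)

corrected = centroids + centering_offsets(centroids)
print(corrected.std(axis=0))  # drift removed: per-axis spread near zero
```

Fitting a smooth function rather than recentering on the raw per-frame centroid keeps tracking noise from jittering the stabilized swivel view.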
  • Patent number: 9445047
    Abstract: A method and system include identifying, by a processing device, at least one media clip captured by at least one camera for an event, detecting at least one human object in the at least one media clip, and calculating, by the processing device, a region in the at least one media clip containing a focus of attention of the detected human object.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: September 13, 2016
    Assignee: Google Inc.
    Inventors: Christian Frueh, Krishna Bharat, Jay Yagnik
  • Patent number: 9147279
    Abstract: Examples disclose a method and system for merging textures. The method may be executable to receive one or more images of an object, identify a texture value for a point in a first image of the one or more images, and determine a metric indicative of a relation between a view reference point vector and a normal vector of a position of a point on the object relative to the image capturing device. Based on the metrics, the method may be executable to determine a weighted average texture value to apply to a corresponding point of a three-dimensional mesh of the object.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: September 29, 2015
    Assignee: Google Inc.
    Inventors: James R. Bruce, Christian Frueh, Arshan Poursohi
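The weighting in this abstract is driven by the relation between the view direction and the surface normal. A minimal sketch, assuming the common choice of the cosine between the two (views that see the surface head-on count more, grazing views count less):

```python
import numpy as np

def merged_texture(samples):
    """Blend texture observations of one surface point from several
    images. Each sample carries a color, the viewing direction toward
    the camera, and the surface normal; views facing the surface
    head-on (view nearly parallel to normal) get higher weight."""
    total, weight_sum = np.zeros(3), 0.0
    for color, view_dir, normal in samples:
        v = view_dir / np.linalg.norm(view_dir)
        n = normal / np.linalg.norm(normal)
        w = max(float(v @ n), 0.0)       # grazing/back-facing views -> 0
        total += w * np.asarray(color, dtype=float)
        weight_sum += w
    return total / weight_sum if weight_sum else total

normal = np.array([0.0, 0.0, 1.0])
samples = [
    (np.array([200.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), normal),  # head-on
    (np.array([0.0, 200.0, 0.0]), np.array([1.0, 0.0, 0.1]), normal),  # grazing
]
print(merged_texture(samples))  # the head-on (red) view dominates
```

Each mesh vertex (or texel) of the 3D model would receive such a weighted average over every image in which it is visible.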
  • Patent number: 9118843
    Abstract: Examples of methods and systems for creating swivel views from handheld video are described. In some examples, a method may be performed by a handheld device to receive or capture a video of a target object and the video may include a plurality of frames and content of the target object from a plurality of viewpoints. The device may determine one or more approximately corresponding frames of the video including content of the target object from a substantially matching viewpoint and may align the approximately corresponding frames of the video based on one or more feature points of the target object to generate an aligned video. The device may provide sampled frames from multiple viewpoints from the aligned video, configured for viewing the target object in a rotatable manner, such as in a swivel view format.
    Type: Grant
    Filed: January 17, 2013
    Date of Patent: August 25, 2015
    Assignee: Google Inc.
    Inventors: Sergey Ioffe, Christian Frueh
  • Publication number: 20150163402
    Abstract: Methods and an apparatus for centering swivel views are disclosed. An example method involves a computing device identifying movement of a pixel location of a 3D object within a sequence of images. Each image of the sequence of images may correspond to a view of the 3D object from a different angular orientation. Based on the identified movement of the pixel location of the 3D object, the computing device may estimate movement parameters of at least one function that describes a location of the 3D object in an individual image. The computing device may also determine for one or more images of the sequence of images a respective modification to the image using the estimated parameters of the at least one function. And the computing device may adjust the pixel location of the 3D object within the one or more images based on the respective modification for the image.
    Type: Application
    Filed: December 10, 2013
    Publication date: June 11, 2015
    Applicant: Google Inc.
    Inventors: Christian Frueh, Ken Conley, Sumit Jain
  • Publication number: 20140198178
    Abstract: Examples of methods and systems for creating swivel views from handheld video are described. In some examples, a method may be performed by a handheld device to receive or capture a video of a target object and the video may include a plurality of frames and content of the target object from a plurality of viewpoints. The device may determine one or more approximately corresponding frames of the video including content of the target object from a substantially matching viewpoint and may align the approximately corresponding frames of the video based on one or more feature points of the target object to generate an aligned video. The device may provide sampled frames from multiple viewpoints from the aligned video, configured for viewing the target object in a rotatable manner, such as in a swivel view format.
    Type: Application
    Filed: January 17, 2013
    Publication date: July 17, 2014
    Inventors: Sergey Ioffe, Christian Frueh
  • Patent number: 8466915
    Abstract: Systems and methods for fusing three-dimensional (3D) ground and airborne models for a geographical information system are described herein. A method embodiment includes aligning the ground and airborne models relative to one another, modifying three-dimensional mesh information in the ground and airborne models after the aligning to obtain modified ground and airborne models, merging the modified ground and airborne models to obtain a fused 3D model and storing the fused 3D model in memory for subsequent access by the geographical information system. A system embodiment includes a 3D model fuser configured to align the ground and airborne models relative to one another, modify three-dimensional mesh information in the ground and airborne models after the aligning to obtain modified ground and airborne models, and merge the modified ground and airborne models to obtain a fused 3D model, and a storage device that stores the fused 3D model for subsequent access by the geographical information system.
    Type: Grant
    Filed: June 15, 2010
    Date of Patent: June 18, 2013
    Assignee: Google Inc.
    Inventor: Christian Frueh
  • Patent number: 8437501
    Abstract: A system for pose generation consisting of a trajectory system, a pose generation system, an intersection extractor, an object identifier, a constraint generator, and a posegraph solver. The trajectory system identifies a number of trajectories based on input positional data of a bounded area. The pose generation system generates one or more poses based on the trajectories. The intersection extractor identifies one or more possible intersections in the one or more poses. The object identifier identifies an object pair for each possible intersection that represents two positional points at each possible intersection. The constraint generator computes and applies one or more intersection constraints to generate an energy value for each object pair based on their geometric relationship. The posegraph solver then minimizes a total energy value by modifying one or more poses and then generates an improved set of pose trajectories based on the modified one or more poses.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: May 7, 2013
    Assignee: Google Inc.
    Inventors: Dragomir Anguelov, Christian Frueh, Sameer Agarwal
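The energy minimization these two related patents describe can be illustrated with a tiny 1-D pose graph: odometry constraints link consecutive poses, and one intersection constraint says the trajectory revisits its starting point. The quadratic energy and plain gradient descent here are illustrative stand-ins for the patented posegraph solver.

```python
import numpy as np

# Five 1-D poses linked by noisy odometry steps, plus one "intersection"
# constraint saying pose 4 revisits pose 0's location.
# E = sum_i (x[i+1] - x[i] - odom[i])^2 + (x[4] - x[0])^2
odom = np.array([1.0, 1.0, 1.0, -2.6])        # noisy step measurements
x = np.concatenate([[0.0], np.cumsum(odom)])  # initial poses from odometry

def energy(x):
    e_odom = np.sum((np.diff(x) - odom) ** 2)
    e_loop = (x[4] - x[0]) ** 2               # intersection constraint
    return float(e_odom + e_loop)

def grad(x):
    g = np.zeros_like(x)
    r = np.diff(x) - odom
    g[:-1] -= 2.0 * r                          # d e_odom / d x[i]
    g[1:] += 2.0 * r
    g[4] += 2.0 * (x[4] - x[0])                # d e_loop / d x[4]
    g[0] = 0.0                                 # pin the first pose (gauge fix)
    return g

e0 = energy(x)
for _ in range(500):                           # minimize total energy
    x -= 0.1 * grad(x)
print(energy(x) < e0, round(x[4] - x[0], 2))   # True 0.08
```

The intersection constraint spreads the 0.4-unit loop-closure error across all the odometry edges instead of letting it accumulate at the end of the trajectory, which is exactly why the improved pose set is more self-consistent than raw odometry.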
  • Patent number: 8259994
    Abstract: A system for pose generation consisting of a trajectory system, a pose generation system, an intersection extractor, an object identifier, a constraint generator, and a posegraph solver. The trajectory system identifies a number of trajectories based on input positional data of a bounded area. The pose generation system generates one or more poses based on the trajectories. The intersection extractor identifies one or more possible intersections in the one or more poses. The object identifier identifies an object pair for each possible intersection that represents two positional points at each possible intersection. The constraint generator computes and applies one or more intersection constraints to generate an energy value for each object pair based on their geometric relationship. The posegraph solver then minimizes a total energy value by modifying one or more poses and then generates an improved set of pose trajectories based on the modified one or more poses.
    Type: Grant
    Filed: September 14, 2010
    Date of Patent: September 4, 2012
    Assignee: Google Inc.
    Inventors: Dragomir Anguelov, Christian Frueh, Sameer Agarwal