Patents by Inventor Christian Frueh

Christian Frueh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240320892
    Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
    Type: Application
    Filed: June 5, 2024
    Publication date: September 26, 2024
    Inventors: Vivek Kwatra, Christian Frueh, Avisek Lahiri, John Lewis
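The auto-regressive conditioning described in this abstract can be illustrated with a toy sketch. Everything below (the linear weights, dimensions, and the `predict_sequence` helper) is hypothetical stand-in code, not the patented model; it only shows how conditioning each frame's prediction on the previous visual state carries temporal context forward.

```python
import numpy as np

# Toy sketch of the auto-regressive idea: each frame's face-shape code is
# predicted from that frame's audio features AND the previous frame's
# predicted state, which stabilizes temporal dynamics. The linear "model"
# weights are stand-ins, not the patented network.

rng = np.random.default_rng(0)
AUDIO_DIM, SHAPE_DIM = 8, 4

W_audio = rng.normal(size=(SHAPE_DIM, AUDIO_DIM)) * 0.1
W_prev = np.eye(SHAPE_DIM) * 0.8  # carry over most of the previous state

def predict_sequence(audio_frames):
    """Run the auto-regressive loop over a sequence of audio features."""
    prev_shape = np.zeros(SHAPE_DIM)      # initial visual state
    outputs = []
    for a in audio_frames:
        shape = W_audio @ a + W_prev @ prev_shape
        outputs.append(shape)
        prev_shape = shape                # condition the next step on this
    return np.stack(outputs)

audio = rng.normal(size=(10, AUDIO_DIM))
shapes = predict_sequence(audio)
print(shapes.shape)  # (10, 4): one 3-D shape code per audio frame
```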
  • Patent number: 12033259
    Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: July 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Vivek Kwatra, Christian Frueh, Avisek Lahiri, John Lewis
  • Publication number: 20230343010
    Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
    Type: Application
    Filed: January 29, 2021
    Publication date: October 26, 2023
    Inventors: Vivek Kwatra, Christian Frueh, Avisek Lahiri, John Lewis
  • Patent number: 10580145
    Abstract: A system and method are disclosed for motion-based feature correspondence. A method may include detecting a first motion of a first feature across two or more first frames of a first video clip captured by a first video camera and a second motion of a second feature across two or more second frames of a second video clip captured by a second video camera. The method may further include determining, based on the first motion in the first video clip and the second motion in the second video clip, that the first feature and the second feature correspond to a common entity, the first motion in the first video clip and the second motion in the second video clip corresponding to one or more common points in time in the first video clip and the second video clip.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: March 3, 2020
    Assignee: Google LLC
    Inventors: Christian Frueh, Caroline Rebecca Pantofaru
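The core idea of motion-based correspondence can be sketched in a few lines. The code below is an illustration, not the patented method: it compares the frame-to-frame displacement magnitudes of two tracked features at common points in time, and treats correlated motion as evidence of a common entity. The `motion_correlation` helper and the synthetic tracks are invented for the example.

```python
import numpy as np

# Illustrative sketch: decide whether features tracked in two camera views
# move together at common points in time. Correlated motion across views
# is evidence the two features belong to one common entity.

def motion_correlation(track_a, track_b):
    """Correlation of per-frame displacement magnitudes of two 2D tracks."""
    da = np.linalg.norm(np.diff(track_a, axis=0), axis=1)
    db = np.linalg.norm(np.diff(track_b, axis=0), axis=1)
    return float(np.corrcoef(da, db)[0, 1])

t = np.linspace(0.0, 2.0 * np.pi, 30)
# One oscillating entity seen from two viewpoints (different projections):
cam1 = np.stack([np.sin(t), np.zeros_like(t)], axis=1)
cam2 = np.stack([0.5 * np.sin(t), 0.2 * np.sin(t)], axis=1)

print(round(motion_correlation(cam1, cam2), 3))  # 1.0: same motion pattern
```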
  • Patent number: 10269177
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: April 23, 2019
    Assignee: GOOGLE LLC
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
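One concrete detail in this abstract is a database of 3-D face models indexed by eye-gaze direction. The sketch below is a hypothetical illustration of that lookup (the keys, vectors, and `select_face_model` helper are invented for the example): the model whose indexed gaze direction is most aligned with the estimated gaze is selected.

```python
import numpy as np

# Hypothetical sketch of a gaze-indexed model database: given the gaze
# estimated inside the HMD, pick the stored face model whose indexed gaze
# direction best matches it (largest dot product of unit vectors).

def normalize(v):
    return v / np.linalg.norm(v)

# Database keyed by quantized gaze directions (unit vectors).
gaze_index = {
    "left":   normalize(np.array([-1.0, 0.0, 1.0])),
    "center": normalize(np.array([0.0, 0.0, 1.0])),
    "right":  normalize(np.array([1.0, 0.0, 1.0])),
}

def select_face_model(gaze):
    """Return the database key whose gaze direction best matches `gaze`."""
    gaze = normalize(np.asarray(gaze, dtype=float))
    return max(gaze_index, key=lambda k: float(gaze_index[k] @ gaze))

print(select_face_model([0.9, 0.1, 1.0]))   # right
print(select_face_model([0.05, 0.0, 1.0]))  # center
```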
  • Publication number: 20190013047
    Abstract: A plurality of videos is analyzed (in real time or after the videos are generated) to identify interesting portions of the videos. The interesting portions are identified based on one or more of the people depicted in the videos, the objects depicted in the videos, the motion of objects and/or people in the videos, and the locations where people depicted in the videos are looking. The interesting portions are combined to generate a content item.
    Type: Application
    Filed: March 31, 2015
    Publication date: January 10, 2019
    Inventors: Arthur Wait, Krishna Bharat, Caroline Rebecca Pantofaru, Christian Frueh, Matthias Grundmann, Jay Yagnik, Ryan Michael Hickman
  • Publication number: 20180101227
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180101989
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Publication number: 20180101984
    Abstract: A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A three-dimensional (3-D) pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. In some cases, the 3-D model of the user's face is selected from 3-D models of the user's face stored in a database that is indexed by eye gaze direction. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
    Type: Application
    Filed: June 7, 2017
    Publication date: April 12, 2018
    Inventors: Christian Frueh, Vivek Kwatra, Avneesh Sud
  • Patent number: 9870621
    Abstract: A system and method are disclosed for identifying feature correspondences among a plurality of video clips of a dynamic scene. In one implementation, a computer system identifies a first feature in a first video clip of a dynamic scene that is captured by a first video camera, and a second feature in a second video clip of the dynamic scene that is captured by a second video camera. The computer system determines, based on motion in the first video clip and motion in the second video clip, that the first feature and the second feature do not correspond to a common entity.
    Type: Grant
    Filed: March 10, 2015
    Date of Patent: January 16, 2018
    Assignee: GOOGLE LLC
    Inventors: Christian Frueh, Caroline Rebecca Pantofaru
  • Patent number: 9630379
    Abstract: Laminated composite (10) comprising at least one electronic substrate (11) and an arrangement of layers (20, 30) made up of at least a first layer (20) of a first metal and/or a first metal alloy and of a second layer (30) of a second metal and/or a second metal alloy adjacent to this first layer (20), wherein the melting temperatures of the first and second layers are different, and wherein, after a thermal treatment of the arrangement of layers (20, 30), a region with at least one intermetallic phase (40) is formed between the first layer and the second layer, wherein the first layer (20) or the second layer (30) is formed by a reaction solder which consists of a mixture of a basic solder with an AgX, CuX or NiX alloy, wherein the component X of the AgX, CuX or NiX alloy is selected from the group consisting of B, Mg, Al, Si, Ca, Se, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, Y, Zr, Nb, Mo, Ag, In, Sn, Sb, Ba, Hf, Ta, W, Au, Bi, La, Ce, Pr, Nd, Gd, Dy, Sm, Er, Tb, Eu, Ho, Tm, Yb and Lu and wherein the melti…
    Type: Grant
    Filed: September 21, 2012
    Date of Patent: April 25, 2017
    Assignee: Robert Bosch GmbH
    Inventors: Thomas Kalich, Christiane Frueh, Franz Wetzl, Bernd Hohenberger, Rainer Holz, Andreas Fix, Michael Guyenot, Andrea Feiock, Michael Guenther, Martin Rittner
  • Patent number: 9449426
    Abstract: Methods and an apparatus for centering swivel views are disclosed. An example method involves a computing device identifying movement of a pixel location of a 3D object within a sequence of images. Each image of the sequence of images may correspond to a view of the 3D object from a different angular orientation. Based on the identified movement of the pixel location of the 3D object, the computing device may estimate movement parameters of at least one function that describes a location of the 3D object in an individual image. The computing device may also determine for one or more images of the sequence of images a respective modification to the image using the estimated parameters of the at least one function. And the computing device may adjust the pixel location of the 3D object within the one or more images based on the respective modification for the image.
    Type: Grant
    Filed: December 10, 2013
    Date of Patent: September 20, 2016
    Assignee: Google Inc.
    Inventors: Christian Frueh, Ken Conley, Sumit Jain
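The abstract leaves the "function that describes a location of the 3D object" unspecified; the sketch below assumes a low-order polynomial purely for illustration. Fitting a smooth function to the noisy per-frame object locations and shifting each frame by the fitted value recenters the swivel sequence.

```python
import numpy as np

# Illustrative sketch (not the patented algorithm): the object's pixel
# x-location drifts across a swivel sequence. Fit a smooth degree-2
# polynomial as the "function describing the object's location", then
# shift each frame so the smoothed location lands at the frame center.

FRAME_WIDTH = 640
frames = np.arange(24)

rng = np.random.default_rng(1)
# Observed (noisy) object centers drifting across the sequence:
observed_x = 300 + 2.0 * frames + rng.normal(scale=3.0, size=frames.size)

# Estimate movement parameters of the assumed function:
coeffs = np.polyfit(frames, observed_x, deg=2)
smooth_x = np.polyval(coeffs, frames)

# Per-frame horizontal shift that recenters the object:
shifts = FRAME_WIDTH / 2 - smooth_x
centered_x = observed_x + shifts

# After centering, only the noise around the fit remains as offset.
print(np.abs(centered_x - FRAME_WIDTH / 2).max() < 15)  # True
```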
  • Patent number: 9445047
    Abstract: A method and system include identifying, by a processing device, at least one media clip captured by at least one camera for an event, detecting at least one human object in the at least one media clip, and calculating, by the processing device, a region in the at least one media clip containing a focus of attention of the detected human object.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: September 13, 2016
    Assignee: Google Inc.
    Inventors: Christian Frueh, Krishna Bharat, Jay Yagnik
  • Publication number: 20160064350
    Abstract: A connection arrangement includes at least one electric and/or electronic component. The at least one electric and/or electronic component has at least one connection face, which is connected in a bonded manner to a join partner by means of a connection layer. The connection layer can for example be an adhesive, soldered, welded, sintered connection or another known connection that connects joining partners while forming a material connection. Furthermore, a reinforcement layer is arranged adjacent to the connection layer in a bonded manner. The reinforcement layer has a higher modulus of elasticity than the connection layer. A particularly good protective effect is achieved if the reinforcement layer is formed in a frame-like manner by an outer and an inner boundary and, at least with the outer boundary thereof, encloses the connection face of the at least one electric and/or electronic component.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 3, 2016
    Inventors: Christiane Frueh, Andreas Fix
  • Patent number: 9177934
    Abstract: The connection arrangement (100, 200, 300, 400) comprises at least one electric and/or electronic component (10). The at least one electric and/or electronic component (10) has at least one connection face (11), which is connected in a bonded manner to a join partner (40) by means of a connection layer (20). The connection layer (20) can for example be an adhesive, soldered, welded, sintered connection or another known connection that connects joining partners while forming a material connection. Furthermore, a reinforcement layer (30′) is arranged adjacent to the connection layer (20) in a bonded manner. The reinforcement layer (30′) has a higher modulus of elasticity than the connection layer (20). A particularly good protective effect is achieved if the reinforcement layer (30′) is formed in a frame-like manner by an outer and an inner boundary (36, 35) and, at least with the outer boundary (36) thereof, encloses the connection face (11) of the at least one electric and/or electronic component (10).
    Type: Grant
    Filed: January 25, 2013
    Date of Patent: November 3, 2015
    Assignee: Robert Bosch GmbH
    Inventors: Christiane Frueh, Andreas Fix
  • Patent number: 9147279
    Abstract: Examples disclose a method and system for merging textures. The method may be executable to receive one or more images of an object, identify a texture value for a point in a first image of the one or more images, and determine a metric indicative of a relation between a view reference point vector and a normal vector of a position of a point on the object relative to the image capturing device. Based on the metrics, the method may be executable to determine a weighted average texture value to apply to a corresponding point of a three-dimensional mesh of the object.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: September 29, 2015
    Assignee: Google Inc.
    Inventors: James R. Bruce, Christian Frueh, Arshan Poursohi
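The abstract's "metric indicative of a relation between a view reference point vector and a normal vector" is sketched below as a clamped dot product; the actual patented metric may differ, and the `merged_texture` helper and data are invented for the example. Texture samples taken head-on get high weight, grazing views get low weight, and the merged value is their weighted average.

```python
import numpy as np

# Minimal sketch of the weighting idea: each image's texture sample at a
# mesh point is weighted by how directly the camera viewed the surface
# there (dot product of view direction and surface normal, clamped at 0).

def merged_texture(samples, view_dirs, normal):
    """Weighted average of per-image texture values at one mesh point."""
    normal = normal / np.linalg.norm(normal)
    weights = np.array([
        max(0.0, float(v / np.linalg.norm(v) @ normal)) for v in view_dirs
    ])
    return float(weights @ samples / weights.sum())

normal = np.array([0.0, 0.0, 1.0])    # surface faces +z
samples = np.array([0.8, 0.4])        # texture values from two images
view_dirs = [
    np.array([0.0, 0.0, 1.0]),        # camera looking head-on
    np.array([1.0, 0.0, 0.1]),        # near-grazing view
]

# Result is dominated by the head-on view's sample (0.8):
print(round(merged_texture(samples, view_dirs, normal), 3))  # 0.764
```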
  • Patent number: 9118843
    Abstract: Examples of methods and systems for creating swivel views from handheld video are described. In some examples, a method may be performed by a handheld device to receive or capture a video of a target object and the video may include a plurality of frames and content of the target object from a plurality of viewpoints. The device may determine one or more approximately corresponding frames of the video including content of the target object from a substantially matching viewpoint and may align the approximately corresponding frames of the video based on one or more feature points of the target object to generate an aligned video. The device may provide sampled frames from multiple viewpoints from the aligned video, configured for viewing the target object in a rotatable manner, such as in a swivel view format.
    Type: Grant
    Filed: January 17, 2013
    Date of Patent: August 25, 2015
    Assignee: Google Inc.
    Inventors: Sergey Ioffe, Christian Frueh
  • Publication number: 20150163402
    Abstract: Methods and an apparatus for centering swivel views are disclosed. An example method involves a computing device identifying movement of a pixel location of a 3D object within a sequence of images. Each image of the sequence of images may correspond to a view of the 3D object from a different angular orientation. Based on the identified movement of the pixel location of the 3D object, the computing device may estimate movement parameters of at least one function that describes a location of the 3D object in an individual image. The computing device may also determine for one or more images of the sequence of images a respective modification to the image using the estimated parameters of the at least one function. And the computing device may adjust the pixel location of the 3D object within the one or more images based on the respective modification for the image.
    Type: Application
    Filed: December 10, 2013
    Publication date: June 11, 2015
    Applicant: Google Inc.
    Inventors: Christian Frueh, Ken Conley, Sumit Jain
  • Publication number: 20150123263
    Abstract: The invention relates to a method for joining a semiconductor (20) to a substrate (10), comprising the following steps: • applying a first paste layer (1) of a sintering paste to the substrate; • heating and compressing the first paste layer to form a first sintered layer; • applying a second paste layer (2) of a sintering paste to the first sintered layer and arranging a semiconductor (20) on the second paste layer; • heating and compressing the second paste layer (2) to form a second sintered layer. The invention further relates to a semiconductor component produced by means of the method.
    Type: Application
    Filed: April 2, 2013
    Publication date: May 7, 2015
    Inventors: Christiane Frueh, Michael Guenther, Thomas Herboth
  • Publication number: 20150014865
    Abstract: The connection arrangement (100, 200, 300, 400) comprises at least one electric and/or electronic component (10). The at least one electric and/or electronic component (10) has at least one connection face (11), which is connected in a bonded manner to a join partner (40) by means of a connection layer (20). The connection layer (20) can for example be an adhesive, soldered, welded, sintered connection or another known connection that connects joining partners while forming a material connection. Furthermore, a reinforcement layer (30′) is arranged adjacent to the connection layer (20) in a bonded manner. The reinforcement layer (30′) has a higher modulus of elasticity than the connection layer (20). A particularly good protective effect is achieved if the reinforcement layer (30′) is formed in a frame-like manner by an outer and an inner boundary (36, 35) and, at least with the outer boundary (36) thereof, encloses the connection face (11) of the at least one electric and/or electronic component (10).
    Type: Application
    Filed: January 25, 2013
    Publication date: January 15, 2015
    Inventors: Christiane Frueh, Andreas Fix