Patents by Inventor Brian Amberg

Brian Amberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240005537
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate values for a representation of a face of a user. For example, an example process may include obtaining sensor data (e.g., live data) of a user, wherein the sensor data is associated with a point in time, generating a set of values representing the user based on the sensor data, and providing the set of values, where a depiction of the user at the point in time is displayed based on the set of values. In some implementations, the set of values includes depth values that define three-dimensional (3D) positions of portions of the user relative to multiple 3D positions of points of a projected surface, and appearance values (e.g., color, texture, opacity) that define appearances of those portions of the user.
    Type: Application
    Filed: June 27, 2023
    Publication date: January 4, 2024
    Inventors: Brian Amberg, John S. McCarten, Nicolas V. Scapel, Peter Kaufmann, Sebastian Martin
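
The entry above describes representing a face at a point in time as depth values measured against the 3D points of a fixed projected surface, plus per-point appearance values. Below is a minimal sketch of that kind of representation, assuming a planar surface grid and NumPy arrays; all names are illustrative, not Apple's implementation.

```python
# A minimal sketch (not Apple's implementation) of the idea in the abstract:
# represent a face frame as per-point depth offsets from a fixed projected
# surface plus per-point appearance (RGBA) values. All names are illustrative.
import numpy as np

def make_projected_surface(h=64, w=64):
    """A fixed planar grid of 3D points the depth values are measured against."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    return np.stack([xs, ys, np.zeros_like(xs)], axis=-1)  # (h, w, 3)

def encode_frame(face_points, face_rgba, surface):
    """Depth = signed offset from each surface point to the face along +z."""
    depth = face_points[..., 2] - surface[..., 2]           # (h, w)
    return {"depth": depth, "appearance": face_rgba}        # the 'set of values'

def decode_frame(values, surface):
    """Reconstruct 3D positions for display from depth + the known surface."""
    points = surface.copy()
    points[..., 2] += values["depth"]
    return points, values["appearance"]

surface = make_projected_surface()
# Fake 'sensor data' for one point in time: a bumpy depth field, flat color.
face = surface.copy()
face[..., 2] = 0.1 * np.sin(4 * surface[..., 0])
rgba = np.ones(surface.shape[:2] + (4,))
values = encode_frame(face, rgba, surface)
points, appearance = decode_frame(values, surface)
assert np.allclose(points, face)
```
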
  • Patent number: 11856203
    Abstract: Advances in deep generative models (DGMs) have led to the development of neural face video compression codecs that are capable of using an order of magnitude less data than “traditional” engineered codecs. These “neural” codecs can reconstruct a target image by warping a source image to approximate the content of the target image and using a DGM to compensate for imperfections in the warped source image. The determined warping operation may be encoded and transmitted using less data (e.g., transmitting a small number of keypoints rather than a dense flow field), leading to bandwidth savings compared to traditional codecs. However, by relying on only a single source image, these methods can produce inaccurate reconstructions. The techniques presented herein improve image reconstruction quality while maintaining bandwidth savings via a combination of multiple source images (i.e., containing multiple views of the human subject) and novel feature aggregation techniques.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: December 26, 2023
    Assignee: Apple Inc.
    Inventors: Michael Tschannen, Ali Benlalah, Anna Volokitin, Brian Amberg, Sebastian Martin, Stefan Brugger
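
A toy illustration of the multi-source reconstruction idea in the abstract above: several source frames are warped toward the target using transmitted keypoints, then aggregated with per-source confidence weights. Real neural codecs use learned dense warps and a generative decoder to fix warping artifacts; the translation-only warp and all names below are stand-in assumptions.

```python
import numpy as np

def warp_by_keypoints(src, src_kp, tgt_kp):
    """Toy warp: shift the whole image by the mean keypoint displacement."""
    dy, dx = np.round(np.mean(tgt_kp - src_kp, axis=0)).astype(int)
    return np.roll(np.roll(src, dy, axis=0), dx, axis=1)

def aggregate(warped, target_hint):
    """Confidence-weighted average: sources that better match a cheap proxy
    for the target (here just the target itself) contribute more."""
    errs = np.array([np.mean((w - target_hint) ** 2) for w in warped])
    weights = np.exp(-errs)
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(warped), axes=1), weights

rng = np.random.default_rng(0)
target = rng.random((32, 32))
# Two noisy 'source views' of the subject, shifted right by 2 and 5 pixels.
sources = [np.roll(target, s, axis=1) + 0.01 * rng.random((32, 32)) for s in (2, 5)]
kp_t = np.array([[8.0, 8.0], [20.0, 20.0]])
kps = [kp_t + [0, 2], kp_t + [0, 5]]   # transmitted keypoints, not pixels
warped = [warp_by_keypoints(s, k, kp_t) for s, k in zip(sources, kps)]
recon, w = aggregate(warped, target)
print("per-source weights:", np.round(w, 3))
```
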
  • Publication number: 20230290082
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and display a portion of a representation of a face of a user. For example, an example process may include obtaining a first set of data corresponding to features of a face of a user in a plurality of configurations; obtaining, while the user is using an electronic device, a second set of data corresponding to one or more partial views of the face from one or more image sensors; generating a representation of the face of the user based on the first set of data and the second set of data, wherein portions of the representation correspond to different confidence values; and displaying the portions of the representation based on the corresponding confidence values.
    Type: Application
    Filed: March 23, 2023
    Publication date: September 14, 2023
    Inventors: Brian Amberg, Nicolas V. Scapel, Jason D. Rickwald, Dorian D. Dargan, Gary I. Butcher, Giancarlo Yerkes, William D. Lindmeier, John S. McCarten
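
The abstract above combines enrollment data with live partial views and gates display on per-portion confidence. A minimal sketch of that gating, with assumed confidence values and small arrays standing in for the real face representation:

```python
# A minimal sketch of the confidence-gated display idea: blend an enrollment
# model of the face with live partial views, carry a per-region confidence,
# and only show regions whose confidence clears a threshold. Illustrative only.
import numpy as np

def fuse(enrolled, live, live_mask):
    """Use live data where the sensors saw the face; fall back to enrollment."""
    face = np.where(live_mask, live, enrolled)
    conf = np.where(live_mask, 0.9, 0.4)   # assumed confidences per source
    return face, conf

def display(face, conf, threshold=0.5):
    """Hide (NaN out) portions whose confidence is below the threshold."""
    out = face.astype(float).copy()
    out[conf < threshold] = np.nan
    return out

enrolled = np.full((4, 4), 0.5)            # first set of data (enrollment)
live = np.full((4, 4), 0.8)                # second set of data (partial views)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True                         # only half the face is visible
face, conf = fuse(enrolled, live, mask)
print(display(face, conf))                 # occluded half is withheld
```
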
  • Patent number: 11379996
    Abstract: Various implementations disclosed herein include devices, systems, and methods that use event camera data to track deformable objects such as faces, hands, and other body parts. One exemplary implementation involves receiving a stream of pixel events output by an event camera. The device tracks the deformable object using this data. Various implementations do so by generating a dynamic representation of the object and modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera. In some implementations, generating the dynamic representation of the object involves identifying features disposed on the deformable surface of the object using the stream of pixel events. The features are determined by identifying patterns of pixel events. As new event stream data is received, the patterns of pixel events are recognized in the new data and used to modify the dynamic representation of the object.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: July 5, 2022
    Assignee: Apple Inc.
    Inventors: Peter Kaufmann, Daniel Kurz, Brian Amberg, Yanghai Tsin
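
A schematic sketch of the event-camera tracking loop described above: fold pixel events into a decaying activity map, then relocate a previously identified event pattern in new data to update a feature's position. The time-surface formulation and template search are common event-vision techniques used here for illustration, not the patented method itself.

```python
import numpy as np

def time_surface(events, shape, decay=0.9):
    """Fold (t, y, x, polarity) events into a decaying per-pixel activity map."""
    surf = np.zeros(shape)
    for _, y, x, p in events:
        surf *= decay
        surf[y, x] += 1.0 if p > 0 else -1.0
    return surf

def track(feature_pos, template, surf, radius=2):
    """Find the offset near the feature whose patch best matches its pattern."""
    best, best_err = feature_pos, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = feature_pos[0] + dy, feature_pos[1] + dx
            patch = surf[y - 1:y + 2, x - 1:x + 2]
            err = np.sum((patch - template) ** 2)
            if err < best_err:
                best, best_err = (y, x), err
    return best

shape = (16, 16)
events = [(t, 8, 8 + t // 3, 1) for t in range(9)]   # a feature drifting right
surf = time_surface(events, shape)
template = surf[7:10, 9:12].copy()                   # event pattern near (8, 10)
print(track((8, 8), template, surf))                 # relocates it at (8, 10)
```
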
  • Patent number: 11120600
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions as well as eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's computer device.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: September 14, 2021
    Assignee: Apple Inc.
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
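
The secondary animation described above (a dog emoji's ears reacting to head motion) can be pictured as physics layered on top of captured movement. A toy sketch using a damped spring with made-up constants; nothing here reflects Apple's actual implementation.

```python
import numpy as np

def simulate_ear(head_y, stiffness=40.0, damping=4.0, dt=1 / 60):
    """Integrate the ear as a damped spring chasing the head's vertical position."""
    pos, vel, out = head_y[0], 0.0, []
    for target in head_y:
        acc = stiffness * (target - pos) - damping * vel
        vel += acc * dt
        pos += vel * dt
        out.append(pos - target)   # ear lag relative to the head
    return np.array(out)

t = np.linspace(0, 2, 120)
head_y = 0.1 * np.sin(2 * np.pi * 2 * t)    # captured nodding at ~2 Hz
ear_lag = simulate_ear(head_y)              # ears flop in response
print("max ear flop:", float(np.max(np.abs(ear_lag))))
```
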
  • Patent number: 11068698
    Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refining the three-dimensional model may provide more accurate tracking of the user's face and may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: July 20, 2021
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
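
The iterative refinement described above can be pictured as repeatedly nudging model parameters to reduce error at a few defined landmark locations. The sketch below fits only a scale and offset with plain gradient steps, where the patent describes refining full poses and expressions with a machine learning network; it illustrates the iteration pattern, not the patented method.

```python
import numpy as np

model_pts = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0]])   # eye corners, nose
observed = 1.3 * model_pts + np.array([0.2, -0.1])              # "detected" landmarks

scale, offset = 1.0, np.zeros(2)        # coarse initial estimate
for step in range(50):                  # iterative refinement loop
    pred = scale * model_pts + offset
    residual = observed - pred          # error at the defined locations
    # Gradient step on 0.5 * ||residual||^2 with respect to scale and offset.
    scale += 0.1 * np.sum(residual * model_pts) / len(model_pts)
    offset += 0.1 * residual.mean(axis=0)
print(round(scale, 3), np.round(offset, 3))   # approaches 1.3, [0.2, -0.1]
```
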
  • Publication number: 20200273180
    Abstract: Various implementations disclosed herein include devices, systems, and methods that use event camera data to track deformable objects such as faces, hands, and other body parts. One exemplary implementation involves receiving a stream of pixel events output by an event camera. The device tracks the deformable object using this data. Various implementations do so by generating a dynamic representation of the object and modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera. In some implementations, generating the dynamic representation of the object involves identifying features disposed on the deformable surface of the object using the stream of pixel events. The features are determined by identifying patterns of pixel events. As new event stream data is received, the patterns of pixel events are recognized in the new data and used to modify the dynamic representation of the object.
    Type: Application
    Filed: November 13, 2018
    Publication date: August 27, 2020
    Inventors: Peter Kaufmann, Daniel Kurz, Brian Amberg, Yanghai Tsin
  • Publication number: 20200125835
    Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refining the three-dimensional model may provide more accurate tracking of the user's face and may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 23, 2020
    Applicant: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Patent number: 10430642
    Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refining the three-dimensional model may provide more accurate tracking of the user's face and may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: October 1, 2019
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Publication number: 20190251728
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions as well as eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's computer device.
    Type: Application
    Filed: February 14, 2019
    Publication date: August 15, 2019
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
  • Publication number: 20190180084
    Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refining the three-dimensional model may provide more accurate tracking of the user's face and may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Application
    Filed: March 23, 2018
    Publication date: June 13, 2019
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Patent number: 10210648
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions as well as eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's computer device.
    Type: Grant
    Filed: November 10, 2017
    Date of Patent: February 19, 2019
    Assignee: Apple Inc.
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
  • Publication number: 20180336714
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions as well as eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and the captured movements. An emoji of a dog, for example, can have its ears move in response to an up-and-down movement or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with the hardware and software capabilities of a recipient's computer device.
    Type: Application
    Filed: November 10, 2017
    Publication date: November 22, 2018
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
  • Publication number: 20180089880
    Abstract: In an embodiment, a method of online video communication is disclosed. An online video communication is established between a source device and a receiving device. The source device captures a live video recording of the sending user. The captured recording is analyzed to identify one or more characteristics of the sending user. The source device then generates avatar data corresponding to the identified characteristics. The avatar data is categorized into a plurality of groups, wherein a first group of the plurality of groups comprises avatar data that is more specific to the sending user. Finally, at least the first group of the plurality of groups is transmitted to the receiving device. The transmitted first group of avatar data defines, at least in part, how to animate an avatar that mimics one or more physical characteristics of the sending user.
    Type: Application
    Filed: September 22, 2017
    Publication date: March 29, 2018
    Inventors: Christopher M. Garrido, Brian Amberg, David L. Biderman, Eric L. Chien, Haitao Guo, Sarah Amsellem, Thibaut Weise, Timothy L. Bienz
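
A minimal sketch of the grouping idea in the last entry: characteristics extracted from the live video are split by how specific they are to the sending user, and the more sender-specific group is (at least) what gets transmitted. The field names and specificity scores below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Characteristic:
    name: str
    value: float
    specificity: float   # how unique this trait is to the sending user

def categorize(traits, threshold=0.5):
    """Group 1: traits more specific to the sender; group 2: generic traits."""
    group1 = [t for t in traits if t.specificity >= threshold]
    group2 = [t for t in traits if t.specificity < threshold]
    return group1, group2

def transmit(group):
    """Stand-in for sending avatar data to the receiving device."""
    return [(t.name, t.value) for t in group]

traits = [
    Characteristic("jaw_width", 0.7, specificity=0.9),    # distinctive trait
    Characteristic("smile_amount", 0.4, specificity=0.2), # generic expression
]
group1, group2 = categorize(traits)
payload = transmit(group1)   # the first group is always sent; group2 optional
print(payload)
```
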