Patents by Inventor Matthias Grundmann

Matthias Grundmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250138704
    Abstract: An example method includes presenting a user interface facilitating a creation of a video from an image associated with a first media item of a plurality of media items, wherein the first media item comprises the image and a video clip that are captured concurrently, receiving user input via the user interface, wherein the user input comprises a selection of a selectable control element presented in the user interface, and upon receiving the user input, presenting the video clip of the first media item in the user interface, wherein the video clip of the first media item is played in the user interface and comprises video content from before and after the image is captured.
    Type: Application
    Filed: January 6, 2025
    Publication date: May 1, 2025
    Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
  • Patent number: 12272096
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Grant
    Filed: June 15, 2023
    Date of Patent: April 8, 2025
    Assignee: GOOGLE LLC
    Inventors: Jianing Wei, Matthias Grundmann
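
The two estimates named in the abstract (a per-frame camera rotation and a 2D anchor translation) are enough to place content without camera calibration targets. The sketch below is not from the patent; it is a minimal illustration, assuming a pinhole camera model, a rotation matrix already estimated (e.g. from a gyroscope), and an assumed anchor depth. All names are hypothetical.

```python
import numpy as np

def pose_from_rotation_and_anchor(R_cam, anchor_xy, depth, fx, fy, cx, cy):
    """Combine an estimated camera rotation with a tracked 2D anchor
    position to produce a 6-DoF pose for rendering virtual content.

    R_cam      3x3 camera rotation estimated for this frame
    anchor_xy  (x, y) pixel location of the tracked anchor region
    depth      assumed distance of the anchor from the camera
    fx, fy, cx, cy  pinhole intrinsics
    """
    # Back-project the anchor pixel to a 3D point at the assumed depth.
    x = (anchor_xy[0] - cx) / fx * depth
    y = (anchor_xy[1] - cy) / fy * depth
    t = np.array([x, y, depth])
    # 4x4 model matrix: apply the inverse camera rotation so the content
    # appears world-stable, translated to the back-projected anchor point.
    pose = np.eye(4)
    pose[:3, :3] = R_cam.T
    pose[:3, 3] = t
    return pose
```

With an identity rotation and the anchor at the principal point, the content lands straight ahead of the camera at the assumed depth.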
  • Patent number: 12189921
    Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items in a second portion of the user interface, and presenting a selectable control element in the second portion of the user interface, wherein the control element enables a user to initiate an operation pertaining to the creation of the video based on the set of selected media items, and creating the video based on video content of the set of selected media items.
    Type: Grant
    Filed: August 14, 2023
    Date of Patent: January 7, 2025
    Assignee: Google LLC
    Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
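
The claimed flow is a UI state machine: items listed in a first portion, a selection set in a second portion, and a control element that triggers video creation. The class below is a minimal state-model sketch of that flow, not code from the patent; all names are illustrative.

```python
class VideoBuilderUI:
    """Minimal state model of the described flow: a first portion lists
    media items, selecting one adds it to the set shown in a second
    portion, and a control element creates the video from the set."""

    def __init__(self, media_items):
        self.first_portion = list(media_items)  # all available items
        self.selected = []                      # second portion: selection set
        self.control_enabled = False

    def select(self, item):
        if item in self.first_portion and item not in self.selected:
            self.selected.append(item)
            self.control_enabled = True         # control appears with a selection

    def create_video(self):
        if not self.control_enabled:
            raise RuntimeError("no media items selected")
        # Stand-in for concatenating the selected items' video content.
        return {"clips": list(self.selected)}
```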
  • Publication number: 20240412334
    Abstract: Systems, methods, devices, and related techniques for accelerating execution of diffusion models or of other neural networks that involve similar operations. Some aspects include accelerating inference computations in neural networks, including inference computations utilized in denoising (also referred to as “diffusion”) neural networks.
    Type: Application
    Filed: June 5, 2024
    Publication date: December 12, 2024
    Inventors: Raman Sarokin, Yu-Hui Chen, Juhyun Lee, Jiuqiang Tang, Chuo-Ling Chang, Andrei Kulik, Matthias Grundmann
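
The filing concerns accelerating the inference computations that diffusion (denoising) networks execute repeatedly. For context only, here is the standard DDPM reverse-diffusion update that such accelerators run once per step; this generic formulation is not taken from the filing.

```python
import numpy as np

def ddpm_step(x_t, eps_pred, alpha_t, alpha_bar_t, z=None):
    """One reverse-diffusion (denoising) update in the standard DDPM form.

    eps_pred is the noise predicted by the network for step t; this
    per-step computation dominates diffusion inference cost.
    """
    coef = (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t)
    mean = (x_t - coef * eps_pred) / np.sqrt(alpha_t)
    if z is None:                    # final step: no noise is added
        return mean
    sigma = np.sqrt(1.0 - alpha_t)  # a simple variance choice
    return mean + sigma * z
```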
  • Publication number: 20240370717
    Abstract: A method for a cross-platform distillation framework includes obtaining a plurality of training samples. The method includes generating, using a student neural network model executing on a first processing unit, a first output based on a first training sample. The method also includes generating, using a teacher neural network model executing on a second processing unit, a second output based on the first training sample. The method includes determining, based on the first output and the second output, a first loss. The method further includes adjusting, based on the first loss, one or more parameters of the student neural network model. The method includes repeating the above steps for each training sample of the plurality of training samples.
    Type: Application
    Filed: May 5, 2023
    Publication date: November 7, 2024
    Applicant: Google LLC
    Inventors: Qifei Wang, Yicheng Fan, Wei Xu, Jiayu Ye, Lu Wang, Chuo-Ling Chang, Dana Alon, Erik Nathan Vee, Hongkun Yu, Matthias Grundmann, Shanmugasundaram Ravikumar, Andrew Stephen Tomkins
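
The abstract enumerates the distillation steps directly: student output, teacher output, a loss between them, and a parameter update on the student, repeated per sample. The sketch below mirrors those steps with stub linear models; in the filing the student and teacher run on different processing units, whereas here both are plain functions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):                      # stand-in for the teacher model
    return x @ np.array([[2.0], [-1.0]])

W = np.zeros((2, 1))                 # student parameters

def student(x):
    return x @ W

for _ in range(200):
    x = rng.normal(size=(8, 2))      # a batch of training samples
    y_s, y_t = student(x), teacher(x)
    loss = float(np.mean((y_s - y_t) ** 2))   # the "first loss" of the claim
    grad = 2.0 * x.T @ (y_s - y_t) / len(x)
    W -= 0.1 * grad                  # adjust the student's parameters
```

After the loop the student's weights approach the teacher's, the intended outcome of the distillation.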
  • Publication number: 20230410329
    Abstract: Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
    Type: Application
    Filed: September 1, 2023
    Publication date: December 21, 2023
    Inventors: Valentin Bazarevsky, Fan Zhang, Andrei Tkachenka, Andrei Vakunov, Matthias Grundmann
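
The described system is a two-stage pipeline: a palm detector proposes regions, then a landmark model localizes 3D hand key points per region. The sketch below shows only the pipeline shape with stub models (a real system would run the learned networks); the 21-key-point count and all names are illustrative assumptions.

```python
import numpy as np

def detect_palms(frame):
    """Stub palm detector: returns a list of (x, y, w, h) boxes."""
    return [(100, 120, 80, 80)]

def hand_landmarks(frame, box):
    """Stub landmark model: 21 (x, y, z) key points inside the box."""
    x, y, w, h = box
    pts = np.random.default_rng(0).uniform(size=(21, 3))
    pts[:, 0] = x + pts[:, 0] * w    # scale x into the palm box
    pts[:, 1] = y + pts[:, 1] * h    # scale y into the palm box
    return pts

def track_hands(frame):
    """Run detection, then landmark localization per detected palm."""
    return [hand_landmarks(frame, box) for box in detect_palms(frame)]
```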
  • Publication number: 20230384911
    Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items in a second portion of the user interface, and presenting a selectable control element in the second portion of the user interface, wherein the control element enables a user to initiate an operation pertaining to the creation of the video based on the set of selected media items, and creating the video based on video content of the set of selected media items.
    Type: Application
    Filed: August 14, 2023
    Publication date: November 30, 2023
    Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
  • Publication number: 20230351724
    Abstract: The present disclosure is directed to systems and methods for performing object detection and pose estimation in 3D from 2D images. Object detection can be performed by a machine-learned model configured to determine various object properties. Implementations according to the disclosure can use these properties to estimate object pose and size.
    Type: Application
    Filed: February 18, 2020
    Publication date: November 2, 2023
    Inventors: Tingbo Hou, Adel Ahmadyan, Jianing Wei, Matthias Grundmann
  • Publication number: 20230326073
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Application
    Filed: June 15, 2023
    Publication date: October 12, 2023
    Inventors: Jianing Wei, Matthias Grundmann
  • Patent number: 11783496
    Abstract: Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: October 10, 2023
    Assignee: GOOGLE LLC
    Inventors: Valentin Bazarevsky, Fan Zhang, Andrei Vakunov, Andrei Tkachenka, Matthias Grundmann
  • Patent number: 11770551
    Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
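
Tracking a vertex "along a plane underlying the bounding volume" and then recovering 3D coordinates amounts to intersecting the pixel's viewing ray with that plane. The function below is a minimal illustration of that lift, not the patented method: it assumes a camera at the origin with y pointing down and a horizontal ground plane; names and the plane convention are assumptions.

```python
import numpy as np

def backproject_to_ground(u, v, K, ground_y=1.5):
    """Lift a tracked 2D vertex (u, v) to 3D by intersecting its camera
    ray with the ground plane y = ground_y (camera at the origin).

    K is the 3x3 camera intrinsic matrix.
    """
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction
    t = ground_y / d[1]     # ray parameter where the ray meets the plane
    return t * d            # 3D point on the plane
```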
  • Patent number: 11726637
    Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: August 15, 2023
    Assignee: Google LLC
    Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
  • Patent number: 11721039
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: August 8, 2023
    Assignee: GOOGLE LLC
    Inventors: Jianing Wei, Matthias Grundmann
  • Patent number: 11694087
    Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: July 4, 2023
    Assignee: GOOGLE LLC
    Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann
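
The block structure in the abstract combines three pieces: a depthwise convolution with a kernel larger than 3×3, a pointwise (1×1) convolution, and a residual shortcut from input to output. A naive NumPy sketch of one such block follows; it is for illustration only (channel-preserving so the residual addition is shape-compatible, no nonlinearity or normalization) and is not the patented network.

```python
import numpy as np

def depthwise_separable_block(x, dw_kernel, pw_weights):
    """One block: depthwise conv (kernel > 3x3), pointwise 1x1 conv,
    and a residual shortcut from the block input to its output.

    x           (H, W, C) input feature map
    dw_kernel   (k, k, C) per-channel depthwise filters, e.g. k = 5
    pw_weights  (C, C) pointwise channel-mixing matrix
    """
    k = dw_kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, C = x.shape
    dw = np.zeros_like(x)
    for i in range(H):               # naive depthwise convolution
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            dw[i, j] = np.sum(patch * dw_kernel, axis=(0, 1))
    pw = dw @ pw_weights             # pointwise 1x1 convolution
    return x + pw                    # residual shortcut
```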
  • Publication number: 20230033956
    Abstract: Example embodiments relate to estimating depth information based on iris size. A computing system may obtain an image depicting a person and determine a facial mesh for a face of the person based on features of the face. In some instances, the facial mesh includes a combination of facial landmarks and eye landmarks. As such, the computing system may estimate an iris pixel dimension of an eye based on the eye landmarks of the facial mesh and estimate a distance of the eye of the face relative to the camera based on the iris pixel dimension, a mean value iris dimension, and an intrinsic matrix of the camera. The computing system may further modify the image based on the estimated distance.
    Type: Application
    Filed: May 21, 2020
    Publication date: February 2, 2023
    Inventors: Ming Yong, Andrey Vakunov, Ivan Grishchenko, Dmitry Lagun, Matthias Grundmann
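
The depth estimate in the abstract reduces to the pinhole relation between the iris's pixel size, a mean physical iris size, and the camera focal length. A one-line sketch, assuming a mean iris diameter of about 11.7 mm (a commonly cited population average, not a value quoted from the filing):

```python
def depth_from_iris(iris_px, focal_px, iris_mm=11.7):
    """Pinhole-model distance estimate: the human iris diameter varies
    little across people, so its measured pixel size plus the camera
    focal length (in pixels) yields depth. Returns millimetres.
    """
    return focal_px * iris_mm / iris_px
```

For example, an iris that spans 11.7 px under a 500 px focal length implies the eye is about 500 mm from the camera.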
  • Publication number: 20230017459
    Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
    Type: Application
    Filed: September 19, 2022
    Publication date: January 19, 2023
    Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann
  • Publication number: 20220415030
    Abstract: The present disclosure is directed to systems and methods for generating synthetic training data using augmented reality (AR) techniques. For example, images of a scene can be used to generate a three-dimensional mapping of the scene. The three-dimensional mapping may be associated with the images to indicate locations for positioning a virtual object. Using an AR rendering engine, implementations can generate an augmented image that depicts the virtual object at an indicated location and orientation. The augmented image can then be stored in a machine learning dataset and associated with a label based on aspects of the virtual object.
    Type: Application
    Filed: November 19, 2019
    Publication date: December 29, 2022
    Inventors: Tingbo Hou, Jianing Wei, Adel Ahmadyan, Matthias Grundmann
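
The final steps of the described pipeline, compositing the rendered virtual object over the scene image and storing it with a label, can be sketched as a simple alpha blend. This is an illustration under assumed conventions (float images in [0, 1], RGBA render output); it is not code from the filing.

```python
import numpy as np

def composite_example(scene, render_rgba, label):
    """Alpha-composite a rendered virtual object over a scene image and
    pair the result with a label derived from the object.

    scene        (H, W, 3) scene image, floats in [0, 1]
    render_rgba  (H, W, 4) rendered virtual object with alpha channel
    """
    rgb, a = render_rgba[..., :3], render_rgba[..., 3:4]
    augmented = a * rgb + (1.0 - a) * scene   # blend by alpha
    return {"image": augmented, "label": label}
```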
  • Patent number: 11494990
    Abstract: In a general aspect, a method can include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, and changing tracking of an AR object within the AR environment between region-tracking mode and plane-tracking mode.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Bryan Woods, Jianing Wei, Sundeep Vaddadi, Cheng Yang, Konstantine Tsotsos, Keith Schaefer, Leon Wong, Keir Banks Mierle, Matthias Grundmann
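
The claim centers on switching an AR object's tracking between a region-tracking mode and a plane-tracking mode. The class below sketches one plausible trigger for that switch (plane availability); the trigger, mode names, and structure are illustrative assumptions, not the patented logic.

```python
class ARObjectTracker:
    """Sketch of the described mode switch: track the AR object against
    an image region until a supporting plane is available, then hand
    the anchor off to plane tracking."""

    REGION, PLANE = "region", "plane"

    def __init__(self):
        self.mode = self.REGION

    def update(self, plane_detected):
        if plane_detected and self.mode == self.REGION:
            self.mode = self.PLANE    # re-anchor the object to the plane
        elif not plane_detected and self.mode == self.PLANE:
            self.mode = self.REGION   # fall back to region tracking
        return self.mode
```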
  • Patent number: 11487407
    Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: November 1, 2022
    Assignee: Google LLC
    Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
  • Patent number: 11449714
    Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: September 20, 2022
    Assignee: GOOGLE LLC
    Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann