Patents by Inventor Matthias Grundmann
Matthias Grundmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250138704
Abstract: An example method includes presenting a user interface facilitating a creation of a video from an image associated with a first media item of a plurality of media items, wherein the first media item comprises the image and a video clip that are captured concurrently, receiving user input via the user interface, wherein the user input comprises a selection of a selectable control element presented in the user interface, and upon receiving the user input, presenting the video clip of the first media item in the user interface, wherein the video clip of the first media item is played in the user interface and comprises video content from before and after the image is captured.
Type: Application
Filed: January 6, 2025
Publication date: May 1, 2025
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Patent number: 12272096
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Grant
Filed: June 15, 2023
Date of Patent: April 8, 2025
Assignee: GOOGLE LLC
Inventors: Jianing Wei, Matthias Grundmann
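The abstract above combines two per-frame estimates, a device rotation and a 2D anchor translation, into a pose for rendering virtual content. As a hedged sketch only (the gyroscope-integration approach and all function names here are illustrative assumptions, not the patented method), the rotation update and pose composition might look like:

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate one gyroscope reading omega (rad/s) over dt into a rotation
    matrix via Rodrigues' formula on the skew-symmetric form of the axis."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def compose_anchor_pose(R_prev, omega, dt, anchor_px):
    """Accumulate the camera rotation and pair it with the tracked 2D anchor
    location (in pixels, homogeneous) to give a (rotation, translation) pose."""
    R = rotation_from_gyro(omega, dt) @ R_prev
    return R, np.array([anchor_px[0], anchor_px[1], 1.0])
```

The key point the abstract makes is that no camera calibration is needed: rotation comes from the device's motion sensors and translation from tracking the anchor region in image space.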
-
Patent number: 12189921
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items in a second portion of the user interface, and presenting a selectable control element in the second portion of the user interface, wherein the control element enables a user to initiate an operation pertaining to the creation of the video based on the set of selected media items, and creating the video based on video content of the set of selected media items.
Type: Grant
Filed: August 14, 2023
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Publication number: 20240412334
Abstract: Systems, methods, devices, and related techniques for accelerating execution of diffusion models or of other neural networks that involve similar operations. Some aspects include accelerating inference computations in neural networks, including inference computations utilized in denoising (also referred to as “diffusion”) neural networks.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Inventors: Raman Sarokin, Yu-Hui Chen, Juhyun Lee, Jiuqiang Tang, Chuo-Ling Chang, Andrei Kulik, Matthias Grundmann
-
Publication number: 20240370717
Abstract: A method for a cross-platform distillation framework includes obtaining a plurality of training samples. The method includes generating, using a student neural network model executing on a first processing unit, a first output based on a first training sample. The method also includes generating, using a teacher neural network model executing on a second processing unit, a second output based on the first training sample. The method includes determining, based on the first output and the second output, a first loss. The method further includes adjusting, based on the first loss, one or more parameters of the student neural network model. The method includes repeating the above steps for each training sample of the plurality of training samples.
Type: Application
Filed: May 5, 2023
Publication date: November 7, 2024
Applicant: Google LLC
Inventors: Qifei Wang, Yicheng Fan, Wei Xu, Jiayu Ye, Lu Wang, Chuo-Ling Chang, Dana Alon, Erik Nathan Vee, Hongkun Yu, Matthias Grundmann, Shanmugasundaram Ravikumar, Andrew Stephen Tomkins
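The loop described in the abstract (student output, teacher output, loss, parameter update, repeat per sample) can be sketched in miniature. This is an illustrative toy rather than the patented framework: the linear student, the tanh teacher, and the squared-error loss are all assumptions, and the two "processing units" are simulated by two ordinary Python functions:

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    """Stand-in for the frozen teacher network (the 'second processing unit')."""
    return np.tanh(x @ np.array([[2.0], [-1.0]]))

# A linear student with one weight matrix, trained to match the teacher.
w = np.zeros((2, 1))
lr = 0.1
for _ in range(500):
    x = rng.normal(size=(16, 2))      # a batch of training samples
    y_teacher = teacher(x)            # teacher output
    y_student = x @ w                 # student output (the 'first processing unit')
    # Gradient of the mean squared distillation loss w.r.t. the student weights.
    loss_grad = 2.0 * x.T @ (y_student - y_teacher) / len(x)
    w -= lr * loss_grad               # adjust the student's parameters
```

After training, the student approximates the teacher's input-output behavior without ever seeing ground-truth labels, which is the essence of distillation.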
-
Publication number: 20230410329
Abstract: Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
Type: Application
Filed: September 1, 2023
Publication date: December 21, 2023
Inventors: Valentin Bazarevsky, Fan Zhang, Andrei Tkachenka, Andrei Vakunov, Matthias Grundmann
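The two-stage structure the abstract describes (detect palms per frame, then localize 3D landmarks per detected palm) can be shown as a pipeline skeleton. Both stage functions below are hypothetical stand-ins with hard-coded outputs; in a real system each would run a trained detection or landmark network:

```python
import numpy as np

def detect_palms(frame):
    """Hypothetical stand-in for the palm-detector stage: returns a list of
    palm bounding boxes (x, y, w, h) found in the frame."""
    return [(40, 40, 32, 32)]

def localize_landmarks(frame, box):
    """Hypothetical stand-in for the landmark stage: returns 21 (x, y, z)
    keypoints for the hand whose palm fills the given box."""
    x, y, w, h = box
    return np.tile([x + w / 2, y + h / 2, 0.0], (21, 1))

def track_hands(frames):
    """Per frame: detect palms, then localize 3D landmarks for each palm."""
    return [[localize_landmarks(f, b) for b in detect_palms(f)]
            for f in frames]
```

The split matters for efficiency: the cheap detector scans the whole frame, while the more expensive landmark model runs only on cropped palm regions.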
-
Publication number: 20230384911
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items in a second portion of the user interface, and presenting a selectable control element in the second portion of the user interface, wherein the control element enables a user to initiate an operation pertaining to the creation of the video based on the set of selected media items, and creating the video based on video content of the set of selected media items.
Type: Application
Filed: August 14, 2023
Publication date: November 30, 2023
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Publication number: 20230351724
Abstract: The present disclosure is directed to systems and methods for performing object detection and pose estimation in 3D from 2D images. Object detection can be performed by a machine-learned model configured to determine various object properties. Implementations according to the disclosure can use these properties to estimate object pose and size.
Type: Application
Filed: February 18, 2020
Publication date: November 2, 2023
Inventors: Tingbo Hou, Adel Ahmadyan, Jianing Wei, Matthias Grundmann
-
Publication number: 20230326073
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Application
Filed: June 15, 2023
Publication date: October 12, 2023
Inventors: Jianing Wei, Matthias Grundmann
-
Patent number: 11783496
Abstract: Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
Type: Grant
Filed: November 16, 2021
Date of Patent: October 10, 2023
Assignee: GOOGLE LLC
Inventors: Valentin Bazarevsky, Fan Zhang, Andrei Vakunov, Andrei Tkachenka, Matthias Grundmann
-
Patent number: 11770551
Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
Type: Grant
Filed: December 15, 2020
Date of Patent: September 26, 2023
Assignee: Google LLC
Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
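Lifting a tracked 2D vertex to 3D coordinates on the plane underlying the bounding volume, as the abstract describes, is geometrically a ray-plane intersection. A minimal sketch, assuming a pinhole camera at the origin with intrinsic matrix K (this is standard illustrative geometry, not the specific claimed method):

```python
import numpy as np

def backproject_to_plane(uv, K, plane_n, plane_d):
    """Lift a 2D pixel (u, v) to the 3D point where its camera ray meets the
    plane n . X + d = 0, for a pinhole camera at the origin with intrinsics K."""
    # Direction of the ray through the pixel, in camera coordinates.
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Solve n . (t * ray) + d = 0 for the ray parameter t.
    t = -plane_d / (plane_n @ ray)
    return t * ray
```

Repeating this for each tracked vertex recovers updated 3D box coordinates from 2D tracking alone, as long as the supporting plane is known.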
-
Patent number: 11726637
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.
Type: Grant
Filed: October 31, 2022
Date of Patent: August 15, 2023
Assignee: Google LLC
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Patent number: 11721039
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Grant
Filed: May 16, 2022
Date of Patent: August 8, 2023
Assignee: GOOGLE LLC
Inventors: Jianing Wei, Matthias Grundmann
-
Patent number: 11694087
Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
Type: Grant
Filed: September 19, 2022
Date of Patent: July 4, 2023
Assignee: GOOGLE LLC
Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann
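The block structure in the abstract, a depthwise convolution with a kernel larger than 3×3, followed by a pointwise (1×1) convolution, wrapped in a residual shortcut, can be written out directly. A minimal NumPy sketch with an illustrative 5×5 depthwise kernel and a ReLU nonlinearity (the activation choice and layout are assumptions, not the claimed architecture):

```python
import numpy as np

def depthwise_conv(x, dw):
    """Depthwise convolution: x is (H, W, C), dw is (k, k, C), one k x k
    filter per channel, 'same' zero padding, no cross-channel mixing."""
    k = dw.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            # Per-channel dot product of the k x k window with its filter.
            out[i, j] = np.einsum('klc,klc->c', xp[i:i + k, j:j + k], dw)
    return out

def separable_block(x, dw, pw):
    """Depthwise 5x5 conv, ReLU, pointwise 1x1 conv (a (C, C) matrix applied
    per pixel), plus a residual shortcut from the block input to its output."""
    return x + np.maximum(depthwise_conv(x, dw), 0.0) @ pw
```

The factorization is the point: a k×k depthwise pass plus a 1×1 pointwise pass costs far fewer multiplies than a full k×k convolution over all channel pairs, which is why larger kernels such as 5×5 remain affordable.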
-
Publication number: 20230033956
Abstract: Example embodiments relate to estimating depth information based on iris size. A computing system may obtain an image depicting a person and determine a facial mesh for a face of the person based on features of the face. In some instances, the facial mesh includes a combination of facial landmarks and eye landmarks. As such, the computing system may estimate an iris pixel dimension of an eye based on the eye landmarks of the facial mesh and estimate a distance of the eye of the face relative to the camera based on the iris pixel dimension, a mean value iris dimension, and an intrinsic matrix of the camera. The computing system may further modify the image based on the estimated distance.
Type: Application
Filed: May 21, 2020
Publication date: February 2, 2023
Inventors: Ming Yong, Andrey Vakunov, Ivan Grishchenko, Dmitry Lagun, Matthias Grundmann
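The depth estimate described in the abstract, combining an iris pixel dimension, a mean-value iris dimension, and the camera intrinsics, reduces to pinhole-camera similar triangles. A minimal sketch, assuming a mean horizontal iris diameter of about 11.7 mm (a commonly cited population average) and a focal length fx in pixels taken from the intrinsic matrix:

```python
MEAN_IRIS_MM = 11.7  # assumed mean physical iris diameter, roughly constant in adults

def depth_from_iris(iris_px, fx):
    """Similar-triangles depth: an object of known physical size that spans
    iris_px pixels under focal length fx (pixels) lies at this distance (mm)."""
    return fx * MEAN_IRIS_MM / iris_px
```

For example, an iris spanning 30 pixels under a 1000-pixel focal length gives a depth of 390 mm. Because the physical iris size varies so little between people, no per-user calibration is required.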
-
Publication number: 20230017459
Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
Type: Application
Filed: September 19, 2022
Publication date: January 19, 2023
Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann
-
Publication number: 20220415030
Abstract: The present disclosure is directed to systems and methods for generating synthetic training data using augmented reality (AR) techniques. For example, images of a scene can be used to generate a three-dimensional mapping of the scene. The three-dimensional mapping may be associated with the images to indicate locations for positioning a virtual object. Using an AR rendering engine, implementations can generate an augmented image that depicts the virtual object at a given position and orientation. The augmented image can then be stored in a machine learning dataset and associated with a label based on aspects of the virtual object.
Type: Application
Filed: November 19, 2019
Publication date: December 29, 2022
Inventors: Tingbo Hou, Jianing Wei, Adel Ahmadyan, Matthias Grundmann
-
Patent number: 11494990
Abstract: In a general aspect, a method can include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, and changing tracking of an AR object within the AR environment between region-tracking mode and plane-tracking mode.
Type: Grant
Filed: October 7, 2019
Date of Patent: November 8, 2022
Assignee: Google LLC
Inventors: Bryan Woods, Jianing Wei, Sundeep Vaddadi, Cheng Yang, Konstantine Tsotsos, Keith Schaefer, Leon Wong, Keir Banks Mierle, Matthias Grundmann
-
Patent number: 11487407
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.
Type: Grant
Filed: November 29, 2021
Date of Patent: November 1, 2022
Assignee: Google LLC
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Patent number: 11449714
Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
Type: Grant
Filed: October 30, 2019
Date of Patent: September 20, 2022
Assignee: GOOGLE LLC
Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann