Patents by Inventor Matthias Grundmann
Matthias Grundmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230410329
Abstract: Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
Type: Application
Filed: September 1, 2023
Publication date: December 21, 2023
Inventors: Valentin Bazarevsky, Fan Zhang, Andrei Tkachenka, Andrei Vakunov, Matthias Grundmann
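The two-stage design the abstract describes (a palm detector proposing regions, then a landmark model regressing 3D key points per palm) can be sketched as follows. Both models here are stand-in placeholders, not the patented networks; `detect_palms` and `localize_landmarks` are hypothetical names introduced for illustration.

```python
import numpy as np

def detect_palms(frame):
    """Stand-in palm detector: returns bounding boxes (x, y, w, h).
    A real system would run a learned single-shot detector here."""
    return [(40, 40, 64, 64)]

def localize_landmarks(frame, box):
    """Stand-in landmark model: returns 21 (x, y, z) key points in the box.
    Fake landmarks on a grid stand in for regressed 3D coordinates."""
    x, y, w, h = box
    pts = [(x + w * (i % 7) / 6.0, y + h * (i // 7) / 2.0, 0.0)
           for i in range(21)]
    return np.array(pts)

def track_hands(frames):
    """Per frame: detect palms, then localize 3D landmarks for each palm."""
    tracks = []
    for frame in frames:
        hands = [localize_landmarks(frame, box) for box in detect_palms(frame)]
        tracks.append(hands)
    return tracks

frames = [np.zeros((128, 128, 3)) for _ in range(3)]
tracks = track_hands(frames)
print(len(tracks), len(tracks[0]), tracks[0][0].shape)  # 3 1 (21, 3)
```

Running the detector only to propose regions, and the heavier landmark model only inside those regions, is what makes the multi-model split cheaper than dense per-pixel hand analysis.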
-
Publication number: 20230384911
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items in a second portion of the user interface, and presenting a selectable control element in the second portion of the user interface, wherein the control element enables a user to initiate an operation pertaining to the creation of the video based on the set of selected media items, and creating the video based on video content of the set of selected media items.
Type: Application
Filed: August 14, 2023
Publication date: November 30, 2023
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Publication number: 20230351724
Abstract: The present disclosure is directed to systems and methods for performing object detection and pose estimation in 3D from 2D images. Object detection can be performed by a machine-learned model configured to determine various object properties. Implementations according to the disclosure can use these properties to estimate object pose and size.
Type: Application
Filed: February 18, 2020
Publication date: November 2, 2023
Inventors: Tingbo Hou, Adel Ahmadyan, Jianing Wei, Matthias Grundmann
-
Publication number: 20230326073
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Application
Filed: June 15, 2023
Publication date: October 12, 2023
Inventors: Jianing Wei, Matthias Grundmann
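The abstract's key observation is that a per-frame camera rotation plus a per-frame anchor translation are enough to place virtual content. A minimal sketch of that composition step, with illustrative values (the rotation might come from integrated gyroscope readings; nothing here is the patented estimator):

```python
import numpy as np

def rotation_about_y(theta):
    """3x3 rotation about the y axis, standing in for the estimated
    rotation of the image capture system."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def anchor_pose(rotation, translation):
    """Compose the estimated rotation and the anchor region's translation
    into a 4x4 pose used to render virtual content at the anchor."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Illustrative per-frame estimates: a 90-degree rotation and an anchor
# 1.5 m in front of, and slightly to the right of, the camera.
pose = anchor_pose(rotation_about_y(np.pi / 2), np.array([0.1, 0.0, 1.5]))
point_at_anchor = pose @ np.array([0.0, 0.0, 0.0, 1.0])
print(np.round(point_at_anchor[:3], 3))  # anchor origin lands at the translation
```

Because only a rotation and a translation are estimated, no camera calibration or full 6-DoF world map is required, which is what "calibration-free instant" refers to.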
-
Patent number: 11783496
Abstract: Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
Type: Grant
Filed: November 16, 2021
Date of Patent: October 10, 2023
Assignee: Google LLC
Inventors: Valentin Bazarevsky, Fan Zhang, Andrei Vakunov, Andrei Tkachenka, Matthias Grundmann
-
Patent number: 11770551
Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
Type: Grant
Filed: December 15, 2020
Date of Patent: September 26, 2023
Assignee: Google LLC
Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
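The lifting step this abstract relies on (recovering 3D coordinates from 2D vertex positions constrained to a known plane) is standard ray-plane intersection under a pinhole camera model. A minimal sketch, with assumed intrinsics and an assumed plane; this is the geometric principle, not the patented method:

```python
import numpy as np

# Assumed pinhole intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def lift_to_plane(uv, plane_normal, plane_offset, K):
    """Back-project pixel uv onto the plane n . X = h (camera at origin):
    cast the pixel's viewing ray and intersect it with the plane,
    turning a tracked 2D vertex position into a 3D point."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    t = plane_offset / (plane_normal @ ray)  # ray-plane intersection
    return t * ray

# A fronto-parallel plane 2 m in front of the camera.
plane_normal = np.array([0.0, 0.0, 1.0])
point = lift_to_plane((420.0, 240.0), plane_normal, 2.0, K)
print(np.round(point, 3))  # 100 px right of center at 2 m depth -> x = 0.4 m
```

Because every tracked vertex stays on the same plane between frames, one such intersection per vertex yields the second set of 3D coordinates without re-running the machine learning model.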
-
Patent number: 11726637
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.
Type: Grant
Filed: October 31, 2022
Date of Patent: August 15, 2023
Assignee: Google LLC
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Patent number: 11721039
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Grant
Filed: May 16, 2022
Date of Patent: August 8, 2023
Assignee: Google LLC
Inventors: Jianing Wei, Matthias Grundmann
-
Patent number: 11694087
Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
Type: Grant
Filed: September 19, 2022
Date of Patent: July 4, 2023
Assignee: Google LLC
Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann
-
Publication number: 20230033956
Abstract: Example embodiments relate to estimating depth information based on iris size. A computing system may obtain an image depicting a person and determine a facial mesh for a face of the person based on features of the face. In some instances, the facial mesh includes a combination of facial landmarks and eye landmarks. As such, the computing system may estimate an iris pixel dimension of an eye based on the eye landmarks of the facial mesh and estimate a distance of the eye of the face relative to the camera based on the iris pixel dimension, a mean value iris dimension, and an intrinsic matrix of the camera. The computing system may further modify the image based on the estimated distance.
Type: Application
Filed: May 21, 2020
Publication date: February 2, 2023
Inventors: Ming Yong, Andrey Vakunov, Ivan Grishchenko, Dmitry Lagun, Matthias Grundmann
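The depth estimate the abstract describes follows directly from the pinhole camera model: an object of known physical size appears smaller in pixels the farther away it is. A minimal sketch, assuming the focal length comes from the camera's intrinsic matrix and using a roughly 11.7 mm mean horizontal iris diameter (an assumed constant, not a value stated in the abstract):

```python
def depth_from_iris(iris_px, focal_px, mean_iris_mm=11.7):
    """Pinhole-model depth estimate: distance is proportional to
    (real iris size / iris size in pixels) times the focal length.
    mean_iris_mm is an assumed population-average iris diameter."""
    return focal_px * mean_iris_mm / iris_px

# An iris spanning 23.4 px under a 1000 px focal length sits ~500 mm away.
print(depth_from_iris(iris_px=23.4, focal_px=1000.0))  # 500.0
```

The human iris diameter varies little across the population, which is what makes it usable as a built-in metric reference without any per-user calibration or depth sensor.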
-
Publication number: 20230017459
Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
Type: Application
Filed: September 19, 2022
Publication date: January 19, 2023
Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann
-
Publication number: 20220415030
Abstract: The present disclosure is directed to systems and methods for generating synthetic training data using augmented reality (AR) techniques. For example, images of a scene can be used to generate a three-dimensional mapping of the scene. The three-dimensional mapping may be associated with the images to indicate locations for positioning a virtual object. Using an AR rendering engine, implementations can generate an augmented image depicting the virtual object at an indicated position and orientation. The augmented image can then be stored in a machine learning dataset and associated with a label based on aspects of the virtual object.
Type: Application
Filed: November 19, 2019
Publication date: December 29, 2022
Inventors: Tingbo Hou, Jianing Wei, Adel Ahmadyan, Matthias Grundmann
-
Patent number: 11494990
Abstract: In a general aspect, a method can include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, and changing tracking of an AR object within the AR environment between region-tracking mode and plane-tracking mode.
Type: Grant
Filed: October 7, 2019
Date of Patent: November 8, 2022
Assignee: Google LLC
Inventors: Bryan Woods, Jianing Wei, Sundeep Vaddadi, Cheng Yang, Konstantine Tsotsos, Keith Schaefer, Leon Wong, Keir Banks Mierle, Matthias Grundmann
-
Patent number: 11487407
Abstract: The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.
Type: Grant
Filed: November 29, 2021
Date of Patent: November 1, 2022
Assignee: Google LLC
Inventors: Matthias Grundmann, Jokubas Zukerman, Marco Paglia, Kenneth Conley, Karthik Raveendran, Reed Morse
-
Patent number: 11449714
Abstract: A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
Type: Grant
Filed: October 30, 2019
Date of Patent: September 20, 2022
Assignee: Google LLC
Inventors: Valentin Bazarevsky, Yury Kartynnik, Andrei Vakunov, Karthik Raveendran, Matthias Grundmann
-
Patent number: 11436755
Abstract: Example embodiments allow for fast, efficient determination of bounding box vertices or other pose information for objects based on images of a scene that may contain the objects. An artificial neural network or other machine learning algorithm is used to generate, from an input image, a heat map and a number of pairs of displacement maps. The location of a peak within the heat map is then used to extract, from the displacement maps, the two-dimensional displacement, from the location of the peak within the image, of vertices of a bounding box that contains the object. This bounding box can then be used to determine the pose of the object within the scene. The artificial neural network can be configured to generate intermediate segmentation maps, coordinate maps, or other information about the shape of the object so as to improve the estimated bounding box.
Type: Grant
Filed: August 9, 2020
Date of Patent: September 6, 2022
Assignee: Google LLC
Inventors: Tingbo Hou, Matthias Grundmann, Liangkai Zhang, Jianing Wei, Adel Ahmadyan
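The decoding step this abstract describes (peak in a heat map, then per-vertex 2D displacements read out at that peak) can be sketched with plain array operations. The network itself is omitted; the tensors below are hand-built stand-ins for its outputs:

```python
import numpy as np

def decode_vertices(heatmap, displacements):
    """Decode bounding-box vertices: find the heat-map peak, then read the
    per-vertex (dx, dy) displacement pair stored at that pixel.
    displacements has shape (2 * num_vertices, H, W)."""
    py, px = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    num_vertices = displacements.shape[0] // 2
    verts = []
    for k in range(num_vertices):
        dx = displacements[2 * k, py, px]
        dy = displacements[2 * k + 1, py, px]
        verts.append((px + dx, py + dy))
    return verts

H, W, V = 32, 32, 8  # 8 vertices of a 3D bounding box
heatmap = np.zeros((H, W))
heatmap[10, 12] = 1.0  # object center detected at pixel (12, 10)
disp = np.zeros((2 * V, H, W))
disp[0, 10, 12], disp[1, 10, 12] = -3.0, -2.0  # vertex 0 offset from the peak
verts = decode_vertices(heatmap, disp)
print(verts[0])  # vertex 0 at (12 - 3, 10 - 2) = (9.0, 8.0)
```

Storing vertex positions as displacements from the peak, rather than as absolute coordinates, keeps the regression target local and lets a single forward pass yield all eight box corners at once.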
-
Publication number: 20220270290
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Application
Filed: May 16, 2022
Publication date: August 25, 2022
Inventors: Jianing Wei, Matthias Grundmann
-
Publication number: 20220191542
Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
Type: Application
Filed: December 15, 2020
Publication date: June 16, 2022
Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
-
Patent number: 11341676
Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
Type: Grant
Filed: December 17, 2019
Date of Patent: May 24, 2022
Assignee: Google LLC
Inventors: Jianing Wei, Matthias Grundmann
-
Publication number: 20220076433
Abstract: Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
Type: Application
Filed: November 16, 2021
Publication date: March 10, 2022
Inventors: Valentin Bazarevsky, Fan Zhang, Andrei Vakunov, Andrei Tkachenka, Matthias Grundmann