Patents by Inventor Jianing Wei

Jianing Wei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230351724
    Abstract: The present disclosure is directed to systems and methods for performing object detection and pose estimation in 3D from 2D images. Object detection can be performed by a machine-learned model configured to determine various object properties. Implementations according to the disclosure can use these properties to estimate object pose and size.
    Type: Application
    Filed: February 18, 2020
    Publication date: November 2, 2023
    Inventors: Tingbo Hou, Adel Ahmadyan, Jianing Wei, Matthias Grundmann
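The listing above does not include an algorithm, but the pipeline the abstract describes (a machine-learned model predicts 2D object properties that are then lifted to a 3D pose and size) can be loosely illustrated as follows. The sketch assumes the model outputs the eight projected vertices of the object's 3D bounding box and recovers pose with a generic PnP solver; the vertex layout, scale handling, and intrinsics are assumptions, not the patented method.

```python
import numpy as np
import cv2

def recover_pose_from_box_points(points_2d, box_size, camera_matrix):
    """Hypothetical decoder: lift predicted 2D box vertices to a 3D pose.

    points_2d:     (8, 2) predicted image coordinates, one per vertex of the
                   object's 3D bounding box (assumed model output).
    box_size:      (3,) estimated box dimensions (w, h, d) in meters.
    camera_matrix: (3, 3) pinhole intrinsics.
    """
    w, h, d = box_size
    # Canonical box vertices centered at the origin, matching the (assumed)
    # vertex ordering of the model's 2D predictions.
    x, y, z = w / 2, h / 2, d / 2
    box_3d = np.array([[sx * x, sy * y, sz * z]
                       for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                      dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(box_3d, points_2d.astype(np.float64),
                                  camera_matrix, None,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed")
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 object-to-camera rotation
    return rotation, tvec.ravel()       # pose of the box in the camera frame
```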
  • Publication number: 20230326073
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Application
    Filed: June 15, 2023
    Publication date: October 12, 2023
    Inventors: Jianing Wei, Matthias Grundmann
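As a rough sketch of the idea in the abstract (per-frame device rotation, e.g. from the IMU, plus a tracked anchor translation at an assumed depth, with no full calibration), the snippet below composes the two estimates into a per-frame model matrix for placing virtual content. The fixed depth and simple pinhole back-projection are illustrative assumptions.

```python
import numpy as np

def anchor_model_matrix(camera_rotation, anchor_px, intrinsics, depth=1.0):
    """Combine per-frame camera rotation with the tracked anchor translation.

    camera_rotation: (3, 3) world-to-camera rotation (e.g., integrated gyro).
    anchor_px:       (u, v) tracked anchor location in the current frame.
    intrinsics:      (3, 3) pinhole camera matrix.
    depth:           assumed distance to the anchor; the method is
                     calibration-free, so scale is arbitrary but consistent.
    """
    # Back-project the tracked 2D anchor to a 3D point in the camera frame.
    uv1 = np.array([anchor_px[0], anchor_px[1], 1.0])
    anchor_cam = depth * np.linalg.solve(intrinsics, uv1)

    # 4x4 transform that orients virtual content with the camera rotation and
    # translates it to the anchor: enough to decide where and at what
    # orientation to render the content each frame.
    model = np.eye(4)
    model[:3, :3] = camera_rotation
    model[:3, 3] = anchor_cam
    return model
```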
  • Patent number: 11770551
    Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
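The key step in this abstract is lifting each tracked 2D vertex back to 3D via the plane underlying the bounding volume. A minimal version of that lifting, assuming a known ground-plane equation and pinhole intrinsics (both assumptions here, not details from the patent), is a ray-plane intersection:

```python
import numpy as np

def lift_vertex_to_plane(vertex_px, intrinsics, plane_normal, plane_d):
    """Intersect the camera ray through a 2D vertex with the underlying plane.

    vertex_px:    (u, v) vertex position in the image.
    intrinsics:   (3, 3) camera matrix.
    plane_normal: (3,) unit normal n of the plane n . X + d = 0 (camera frame).
    plane_d:      scalar offset d of the plane.
    Returns the 3D vertex coordinates in the camera frame.
    """
    ray = np.linalg.solve(intrinsics, np.array([vertex_px[0], vertex_px[1], 1.0]))
    denom = plane_normal @ ray
    if abs(denom) < 1e-9:
        raise ValueError("Ray is parallel to the plane")
    t = -plane_d / denom          # camera center is at the origin
    return t * ray                # 3D point where the ray meets the plane
```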
  • Patent number: 11721039
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: August 8, 2023
    Assignee: Google LLC
    Inventors: Jianing Wei, Matthias Grundmann
  • Publication number: 20220415030
    Abstract: The present disclosure is directed to systems and methods for generating synthetic training data using augmented reality (AR) techniques. For example, images of a scene can be used to generate a three-dimensional mapping of the scene. The three-dimensional mapping may be associated with the images to indicate locations for positioning a virtual object. Using an AR rendering engine, implementations can generate an augmented image that depicts the virtual object within the scene at a determined position and orientation. The augmented image can then be stored in a machine learning dataset and associated with a label based on aspects of the virtual object.
    Type: Application
    Filed: November 19, 2019
    Publication date: December 29, 2022
    Inventors: Tingbo Hou, Jianing Wei, Adel Ahmadyan, Matthias Grundmann
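A toy version of the data-generation loop this abstract outlines might alpha-composite a rendered object crop into a scene image at a location taken from the 3D mapping and emit the matching label; the inputs and label format below are placeholders, not the disclosed system.

```python
import numpy as np

def composite_and_label(scene_rgb, object_rgba, top_left, class_name):
    """Paste a rendered RGBA object into a scene image and build its label.

    scene_rgb:   (H, W, 3) uint8 background image of the real scene.
    object_rgba: (h, w, 4) uint8 rendering of the virtual object (e.g., produced
                 by an AR engine at a pose chosen from the scene's 3D mapping).
    top_left:    (x, y) pixel location where the rendering is placed; the patch
                 is assumed to fit inside the scene image.
    class_name:  label to associate with the augmented image.
    """
    x, y = top_left
    h, w = object_rgba.shape[:2]
    out = scene_rgb.astype(np.float32).copy()

    # Standard alpha compositing of the rendered object over the scene.
    alpha = object_rgba[..., 3:4].astype(np.float32) / 255.0
    patch = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * object_rgba[..., :3] + (1 - alpha) * patch

    label = {"class": class_name,
             "bbox": [x, y, x + w, y + h]}   # axis-aligned box around the object
    return out.astype(np.uint8), label
```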
  • Patent number: 11494990
    Abstract: In a general aspect, a method can include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, and changing tracking of an AR object within the AR environment between region-tracking mode and plane-tracking mode.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Bryan Woods, Jianing Wei, Sundeep Vaddadi, Cheng Yang, Konstantine Tsotsos, Keith Schaefer, Leon Wong, Keir Banks Mierle, Matthias Grundmann
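The abstract is terse, but its core idea (switching an AR object between region tracking and plane tracking) can be caricatured as a small state machine; the switching criterion below (whether a plane is currently detected under the object) is an assumption for illustration only.

```python
from enum import Enum

class TrackingMode(Enum):
    REGION = "region"   # track a 2D image region around the AR object
    PLANE = "plane"     # track the AR object against a detected plane

def next_mode(current: TrackingMode, plane_under_object: bool) -> TrackingMode:
    """Switch tracking modes as plane detections appear or disappear."""
    if plane_under_object:
        return TrackingMode.PLANE    # prefer plane tracking when available
    return TrackingMode.REGION       # fall back to region tracking otherwise
```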
  • Patent number: 11436755
    Abstract: Example embodiments allow for fast, efficient determination of bounding box vertices or other pose information for objects based on images of a scene that may contain the objects. An artificial neural network or other machine learning algorithm is used to generate, from an input image, a heat map and a number of pairs of displacement maps. The location of a peak within the heat map is then used to extract, from the displacement maps, the two-dimensional displacement, from the location of the peak within the image, of vertices of a bounding box that contains the object. This bounding box can then be used to determine the pose of the object within the scene. The artificial neural network can be configured to generate intermediate segmentation maps, coordinate maps, or other information about the shape of the object so as to improve the estimated bounding box.
    Type: Grant
    Filed: August 9, 2020
    Date of Patent: September 6, 2022
    Assignee: Google LLC
    Inventors: Tingbo Hou, Matthias Grundmann, Liangkai Zhang, Jianing Wei, Adel Ahmadyan
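As a concrete reading of the decoding step described in this abstract, the snippet below takes a single-object heat map plus eight (dx, dy) displacement-map pairs, finds the heat-map peak, and offsets the peak by the displacements sampled there to obtain the 2D bounding-box vertices. The tensor layout is an assumption.

```python
import numpy as np

def decode_box_vertices(heatmap, displacements):
    """Decode 2D bounding-box vertices from network outputs.

    heatmap:       (H, W) object-center confidence map.
    displacements: (16, H, W) stack of eight (dx, dy) pairs giving, at each
                   pixel, the offset from that pixel to each box vertex.
    Returns an (8, 2) array of vertex coordinates in (x, y) image pixels.
    """
    # Location of the strongest peak = most likely object center.
    peak_y, peak_x = np.unravel_index(np.argmax(heatmap), heatmap.shape)

    vertices = np.empty((8, 2), dtype=np.float32)
    for k in range(8):
        dx = displacements[2 * k, peak_y, peak_x]
        dy = displacements[2 * k + 1, peak_y, peak_x]
        vertices[k] = (peak_x + dx, peak_y + dy)   # peak + sampled displacement
    return vertices
```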
  • Publication number: 20220270290
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Application
    Filed: May 16, 2022
    Publication date: August 25, 2022
    Inventors: Jianing Wei, Matthias Grundmann
  • Publication number: 20220191542
    Abstract: A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom Ablavatski, Matthias Grundmann
  • Patent number: 11341676
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: May 24, 2022
    Assignee: Google LLC
    Inventors: Jianing Wei, Matthias Grundmann
  • Publication number: 20220044439
    Abstract: Example embodiments allow for fast, efficient determination of bounding box vertices or other pose information for objects based on images of a scene that may contain the objects. An artificial neural network or other machine learning algorithm is used to generate, from an input image, a heat map and a number of pairs of displacement maps. The location of a peak within the heat map is then used to extract, from the displacement maps, the two-dimensional displacement, from the location of the peak within the image, of vertices of a bounding box that contains the object. This bounding box can then be used to determine the pose of the object within the scene. The artificial neural network can be configured to generate intermediate segmentation maps, coordinate maps, or other information about the shape of the object so as to improve the estimated bounding box.
    Type: Application
    Filed: August 9, 2020
    Publication date: February 10, 2022
    Inventors: Tingbo Hou, Matthias Grundmann, Liangkai Zhang, Jianing Wei, Adel Ahmadyan
  • Publication number: 20200250852
    Abstract: The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to the image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
    Type: Application
    Filed: December 17, 2019
    Publication date: August 6, 2020
    Inventors: Jianing Wei, Matthias Grundmann
  • Patent number: 10200613
    Abstract: The disclosed technology includes techniques for providing improved video stabilization on a mobile device. Using gyroscope data of the mobile device, the physical camera orientation of the mobile device may be estimated over time. Using the physical camera orientation and historical data, corresponding virtual camera orientations representing a camera orientation with undesired rotational movement removed may be modeled using a non-linear filter to provide for mapping of a real image to a stabilized virtual image. The virtual camera orientation may be modified to prevent undefined pixels from appearing in the output image.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: February 5, 2019
    Assignee: Google LLC
    Inventors: Chia-Kai Liang, Xue Tu, Lun-Cheng Chu, Jianing Wei
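A stripped-down version of the pipeline this abstract describes: integrate gyroscope samples into a physical camera orientation, smooth it into a virtual orientation with a simple non-linear (clamped) filter, and warp the frame with the homography between the two orientations. The clamping constant and the SciPy rotation utilities are illustrative choices, not the patented filter.

```python
import numpy as np
import cv2
from scipy.spatial.transform import Rotation

def stabilize_frame(frame, gyro_rate, dt, state, K, max_correction=0.05):
    """One step of a toy gyro-based video stabilizer.

    frame:     current image (H, W, 3).
    gyro_rate: (3,) angular velocity in rad/s at this frame.
    state:     dict holding 'physical' and 'virtual' Rotation objects.
    K:         (3, 3) camera intrinsics.
    max_correction: limit (radians) on how far the virtual camera may lag the
                    physical one before it is pulled along -- the crude
                    non-linear element, which also keeps output pixels defined.
    """
    # Integrate the gyro sample into the physical camera orientation.
    state["physical"] = state["physical"] * Rotation.from_rotvec(gyro_rate * dt)

    # Non-linear smoothing: hold the virtual camera still unless it drifts more
    # than max_correction away from the physical one, then clamp the gap.
    delta = (state["virtual"].inv() * state["physical"]).as_rotvec()
    excess = np.linalg.norm(delta) - max_correction
    if excess > 0:
        delta *= excess / np.linalg.norm(delta)
        state["virtual"] = state["virtual"] * Rotation.from_rotvec(delta)

    # Homography that re-renders the real frame from the virtual orientation.
    R = (state["virtual"] * state["physical"].inv()).as_matrix()
    H = K @ R @ np.linalg.inv(K)
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```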
  • Publication number: 20180115714
    Abstract: The disclosed technology includes techniques for providing improved video stabilization on a mobile device. Using gyroscope data of the mobile device, the physical camera orientation of the mobile device may be estimated over time. Using the physical camera orientation and historical data, corresponding virtual camera orientations representing a camera orientation with undesired rotational movement removed may be modeled using a non-linear filter to provide for mapping of a real image to a stabilized virtual image. The virtual camera orientation may be modified to prevent undefined pixels from appearing in the output image.
    Type: Application
    Filed: December 20, 2017
    Publication date: April 26, 2018
    Inventors: Chia-Kai Liang, Xue Tu, Lun-Cheng Chu, Jianing Wei
  • Patent number: 9888179
    Abstract: The disclosed technology includes techniques for providing improved video stabilization on a mobile device. Using gyroscope data of the mobile device, the physical camera orientation of the mobile device may be estimated over time. Using the physical camera orientation and historical data, corresponding virtual camera orientations representing a camera orientation with undesired rotational movement removed may be modeled using a non-linear filter to provide for mapping of a real image to a stabilized virtual image. The virtual camera orientation may be modified to prevent undefined pixels from appearing in the output image.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: February 6, 2018
    Assignee: Google LLC
    Inventors: Chia-Kai Liang, Xue Tu, Lun-Cheng Chu, Jianing Wei
  • Patent number: 9715721
    Abstract: Focus detection determines whether an image is in focus and can be used to improve camera autofocus performance. Focus detection based on a single feature does not provide enough reliability to distinguish in-focus from slightly out-of-focus images, so a focus detection algorithm that combines multiple sharpness features is described herein. A large image data set containing in-focus and out-of-focus images is used to develop a focus detector that separates the two classes. Features such as iterative blur estimation, FFT linearity, edge percentage, wavelet energy ratio, improved wavelet energy ratio, Chebyshev moment ratio, and chromatic aberration can be used to evaluate sharpness and identify heavily blurred images.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: July 25, 2017
    Assignee: Sony Corporation
    Inventors: Pingshan Li, Jianing Wei, Xue Tu, Alexander Berestov, Takami Mizukura, Akira Matsui, Tomonori Shuda
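In the spirit of the multi-feature detector described above, the sketch below computes two simplified sharpness features (edge percentage and a high-frequency FFT energy ratio, stand-ins for the fuller feature set named in the abstract) and fits an off-the-shelf classifier on labeled in-focus / out-of-focus images. Thresholds and feature definitions are assumptions.

```python
import numpy as np
import cv2
from sklearn.linear_model import LogisticRegression

def sharpness_features(gray):
    """Two simplified sharpness features for a grayscale uint8 image."""
    # Edge percentage: fraction of pixels with strong gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    edge_pct = float((mag > 100).mean())

    # High-frequency energy ratio from the FFT magnitude spectrum.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    low = spec[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8].sum()
    hf_ratio = float((spec.sum() - low) / (spec.sum() + 1e-9))

    return [edge_pct, hf_ratio]

def train_focus_detector(images, labels):
    """Fit a classifier on labeled in-focus (1) / out-of-focus (0) images."""
    X = np.array([sharpness_features(img) for img in images])
    return LogisticRegression().fit(X, np.asarray(labels))
```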
  • Publication number: 20170178296
    Abstract: Focus detection determines whether an image is in focus and can be used to improve camera autofocus performance. Focus detection based on a single feature does not provide enough reliability to distinguish in-focus from slightly out-of-focus images, so a focus detection algorithm that combines multiple sharpness features is described herein. A large image data set containing in-focus and out-of-focus images is used to develop a focus detector that separates the two classes. Features such as iterative blur estimation, FFT linearity, edge percentage, wavelet energy ratio, improved wavelet energy ratio, Chebyshev moment ratio, and chromatic aberration can be used to evaluate sharpness and identify heavily blurred images.
    Type: Application
    Filed: December 18, 2015
    Publication date: June 22, 2017
    Inventors: Pingshan Li, Jianing Wei, Xue Tu, Alexander Berestov, Takami Mizukura, Akira Matsui, Tomonori Shuda
  • Publication number: 20170171456
    Abstract: A first image capture component may capture a first image of a scene, and a second image capture component may capture a second image of the scene. There may be a particular baseline distance between the first image capture component and the second image capture component, and at least one of the first image capture component or the second image capture component may have a focal length. A disparity may be determined between a portion of the scene as represented in the first image and the portion of the scene as represented in the second image. Possibly based on the disparity, the particular baseline distance, and the focal length, a focus distance may be determined. The first image capture component and the second image capture component may be set to focus to the focus distance.
    Type: Application
    Filed: December 10, 2015
    Publication date: June 15, 2017
    Inventor: Jianing Wei
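The relationship this abstract relies on is the standard stereo triangulation identity, roughly distance = baseline × focal length / disparity; a minimal helper (names and units assumed) follows.

```python
def focus_distance(disparity_px, baseline_m, focal_length_px):
    """Estimate the distance to set focus to from stereo disparity.

    disparity_px:    horizontal shift (pixels) of the scene portion between
                     the two image capture components.
    baseline_m:      distance (meters) between the two capture components.
    focal_length_px: focal length expressed in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive")
    return baseline_m * focal_length_px / disparity_px  # depth in meters
```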
  • Patent number: 9656059
    Abstract: A cochlear stimulation device comprising an electrode array designed to provide the enhanced charge-injection capacity necessary for neural stimulation. The electrode array comprises electrodes with high surface area or a fractal geometry and correspondingly high electrode capacitance and low electrical impedance. The resulting electrodes have a robust surface and sufficient mechanical strength to withstand the physical stress required for long-term stability. The device further comprises wire traces having a multilayer structure, which reduces the width of the conducting part of the electrode array. The cochlear prosthesis is attached to the cochleostomy by a grommet made from a single piece of biocompatible polymer. The device, designed to achieve optimum neural stimulation through appropriate electrode design, is a significant improvement over commercially available hand-built devices.
    Type: Grant
    Filed: October 17, 2014
    Date of Patent: May 23, 2017
    Assignee: Second Sight Medical Products, Inc.
    Inventors: Robert J Greenberg, David D Zhou, Jordan Matthew Neysmith, Kelly H McClure, Jianing Wei, Neil H Talbot, James S Little
  • Patent number: 9646225
    Abstract: A defocus estimation algorithm that operates on a single image is described herein. A Laplacian of Gaussian approximation is determined by computing a difference of Gaussian, and defocus blur can then be estimated by computing the blur difference between two images using that difference of Gaussian.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: May 9, 2017
    Assignee: Sony Corporation
    Inventors: Pingshan Li, Jianing Wei
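A loose illustration of the DoG idea above: approximate the Laplacian-of-Gaussian response with a difference of two Gaussian blurs, then compare that response between the input image and a further-blurred copy; the ratio approaches one as the input's own defocus blur grows. The specific sigmas and the use of a response ratio are assumptions, not the claimed estimator.

```python
import numpy as np
import cv2

def dog_response(gray, sigma, k=1.6):
    """Approximate the Laplacian of Gaussian with a difference of Gaussians."""
    g1 = cv2.GaussianBlur(gray, (0, 0), sigma)
    g2 = cv2.GaussianBlur(gray, (0, 0), sigma * k)
    return g1.astype(np.float32) - g2.astype(np.float32)

def relative_defocus(gray, sigma=1.0, extra_blur=2.0):
    """Proxy for defocus blur computed from a single image.

    Compares DoG energy of the input against a further-blurred copy: a sharp
    image loses much of its DoG energy after extra blurring, while an already
    defocused image changes little.
    """
    base = np.abs(dog_response(gray, sigma)).mean()
    reblurred = cv2.GaussianBlur(gray, (0, 0), extra_blur)
    blurred = np.abs(dog_response(reblurred, sigma)).mean()
    return blurred / (base + 1e-9)   # closer to 1.0 = more defocused input
```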