Patents by Inventor Daniyar Turmukhambetov

Daniyar Turmukhambetov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046610
    Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
    Type: Application
    Filed: October 13, 2023
    Publication date: February 8, 2024
    Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
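The box-embedding overlap described in the abstract above can be sketched numerically. The parameterization below (axis-aligned boxes given by min/max corner vectors, with each overlap factor defined as the ratio of the intersection volume to one box's own volume) is an assumption for illustration, not necessarily the encoding used in the patent.

```python
import numpy as np

def box_volume(lo, hi):
    """Volume of an axis-aligned box given min/max corner vectors."""
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def asymmetric_overlap(box_a, box_b):
    """Return (enclosure, concentration) factors for two box encodings.

    Each box encoding is a (lo, hi) pair of corners in the embedding
    space.  enclosure = fraction of box_a's volume inside box_b;
    concentration = fraction of box_b's volume inside box_a.
    """
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    inter = box_volume(lo, hi)
    return inter / box_volume(*box_a), inter / box_volume(*box_b)

# Two 2-D boxes: A = [0,2]x[0,2], B = [1,3]x[0,2]; intersection area 2.
a = (np.array([0.0, 0.0]), np.array([2.0, 2.0]))
b = (np.array([1.0, 0.0]), np.array([3.0, 2.0]))
enc, con = asymmetric_overlap(a, b)
print(enc, con)  # 0.5 0.5
```

Swapping which box's volume appears in the denominator is what makes the measure asymmetric: the two factors differ whenever the boxes have different volumes.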
  • Publication number: 20230410349
    Abstract: A method or a system for map-free visual relocalization of a device. The system obtains a reference image of an environment captured by a reference camera at a known reference pose. The system also receives a query image taken by a camera of the device. The system determines a relative pose of the device's camera with respect to the reference camera based in part on the reference image and the query image. The system then determines the pose of the query camera in the environment from the reference pose and the relative pose.
    Type: Application
    Filed: June 20, 2023
    Publication date: December 21, 2023
    Inventors: Eduardo Henrique Arnold, Jamie Michael Wynn, Guillermo Garcia-Hernando, Sara Alexandra Gomes Vicente, Aron Monszpart, Victor Adrian Prisacariu, Daniyar Turmukhambetov, Eric Brachmann, Axel Barroso-Laguna
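The final pose-composition step in the abstract above can be sketched with homogeneous transforms. The camera-to-world convention and the direction of the predicted relative transform are assumptions for illustration; the patent may use different conventions.

```python
import numpy as np

def pose(R, t):
    """Assemble a 4x4 rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed conventions: T_ref maps reference-camera coordinates to world
# coordinates, and the predicted T_rel maps reference-camera coordinates
# to query-camera coordinates.  Composing them gives the query camera's
# camera-to-world pose: T_query = T_ref @ inv(T_rel).
T_ref = pose(np.eye(3), np.array([0.0, 0.0, 0.0]))   # reference at origin
T_rel = pose(np.eye(3), np.array([1.0, 0.0, 0.0]))   # pure translation
T_query = T_ref @ np.linalg.inv(T_rel)
center = T_query[:3, 3]  # query camera center in world coordinates
```

With the reference camera at the origin and a one-unit relative translation, the query camera center lands at (-1, 0, 0) under these conventions.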
  • Patent number: 11836965
    Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: December 5, 2023
    Assignee: Niantic, Inc.
    Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
  • Patent number: 11805236
    Abstract: A computer system generates stereo image data from monocular images. The system generates depth maps for single images using a monocular depth estimation method. The system converts the depth maps to disparity maps and uses the disparity maps to generate additional images forming stereo pairs with the monocular images. The stereo pairs can be used to form a stereo image training data set for training various models, including depth estimation models or stereo matching models.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: October 31, 2023
    Assignee: Niantic, Inc.
    Inventors: James Watson, Oisin Mac Aodha, Daniyar Turmukhambetov, Gabriel J. Brostow, Michael David Firman
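The depth-to-disparity conversion and view synthesis described above can be sketched as follows. The pinhole relation d = f·B/z is standard; the integer forward warp with zero-filled holes is a simplification for illustration (a real pipeline would use continuous sampling and inpaint occlusions).

```python
import numpy as np

def depth_to_disparity(depth, focal, baseline):
    """Convert a depth map to a disparity map via d = f * B / z."""
    return focal * baseline / np.maximum(depth, 1e-6)

def warp_right_from_left(left, disparity):
    """Forward-warp a left image into a synthetic right view by shifting
    each pixel left by its (rounded) disparity.  Holes and occlusions
    are left as zeros in this sketch."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        xr = xs - np.round(disparity[y]).astype(int)
        valid = (xr >= 0) & (xr < w)
        right[y, xr[valid]] = left[y, valid]
    return right

# Toy example: a constant-depth plane gives constant disparity of 2 px.
depth = np.full((4, 8), 5.0)
disp = depth_to_disparity(depth, focal=10.0, baseline=1.0)
left = np.arange(32, dtype=float).reshape(4, 8)
right = warp_right_from_left(left, disp)
```

The (left, right) pair produced this way is exactly the kind of synthetic stereo pair the abstract describes assembling into a training set.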
  • Patent number: 11711508
    Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: July 25, 2023
    Assignee: Niantic, Inc.
    Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
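The per-pixel loss rule in the abstract above can be written down directly: wherever the hinted loss beats the primary loss, keep the primary loss but add a supervised term pulling the prediction toward the hint; elsewhere use the primary loss alone. The log-L1 form of the supervised depth term is an assumed choice for this sketch.

```python
import numpy as np

def depth_hints_loss(primary, hinted, depth_pred, depth_hint):
    """Per-pixel overall loss: primary everywhere, plus a supervised
    depth term (log-L1 here, an assumed choice) at pixels where the
    hinted reprojection loss is smaller than the primary one."""
    supervised = np.abs(np.log(depth_pred) - np.log(depth_hint))
    use_hint = hinted < primary
    return primary + np.where(use_hint, supervised, 0.0)

# Two pixels: the hint wins at the first (and already matches the
# prediction, so the supervised term is zero); it loses at the second.
primary = np.array([1.0, 0.5])
hinted = np.array([0.5, 1.0])
overall = depth_hints_loss(primary, hinted,
                           np.array([2.0, 2.0]), np.array([2.0, 4.0]))
```

Training then minimizes the mean of `overall` across all pixels and image pairs, as the abstract's final sentence describes.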
  • Publication number: 20220383449
    Abstract: A depth prediction model for predicting a depth map from an input image is disclosed. The depth prediction model leverages wavelet decomposition to minimize computations. The depth prediction model comprises a plurality of encoding layers, a coarse depth prediction layer, a plurality of decoding layers, and a plurality of inverse discrete wavelet transforms (IDWTs). The encoding layers are configured to input the image and to downsample the image into feature maps including a coarse feature map. The coarse depth prediction layer is configured to input the coarse feature map and to output a coarse depth map. The decoding layers are configured to input the feature maps and to predict wavelet coefficients based on the feature maps. The IDWTs are configured to upsample the coarse depth map, based on the predicted wavelet coefficients, into a final depth map at the same resolution as the input image.
    Type: Application
    Filed: May 20, 2022
    Publication date: December 1, 2022
    Inventors: Michaël Lalaina Ramamonjisoa, Michael David Firman, James Watson, Daniyar Turmukhambetov
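One IDWT upsampling step of the decoder described above can be sketched with Haar wavelets. The Haar basis and the normalization used here (chosen so that zero detail coefficients reproduce each coarse value as a constant 2x2 block) are assumptions for illustration; wavelet libraries differ in their conventions.

```python
import numpy as np

def haar_idwt2(ll, lh, hl, hh):
    """One inverse 2-D Haar step: four HxW coefficient maps -> a 2Hx2W
    map.  ll is the coarse (approximation) band; lh/hl/hh stand in for
    the decoder-predicted detail coefficients."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

# With all detail coefficients zero, the IDWT reduces to a constant
# per-block 2x upsampling of the coarse depth map.
coarse = np.array([[2.0, 4.0], [6.0, 8.0]])
zeros = np.zeros_like(coarse)
fine = haar_idwt2(coarse, zeros, zeros, zeros)
```

Chaining such steps, each fed by the detail coefficients a decoding layer predicts, takes the coarse depth map up to the input image's resolution; the efficiency win the abstract mentions comes from predicting sparse coefficients instead of dense per-pixel depth at every scale.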
  • Publication number: 20220351518
    Abstract: The present disclosure describes approaches for evaluating interest points for use in localization, based on the repeatability with which an interest point is detected in images capturing the scene that contains it. The repeatability of interest points is determined using a trained repeatability model. The repeatability model is trained by analyzing a time series of images of a scene and determining a repeatability function for each interest point in the scene. The repeatability function is determined by identifying which images in the time series allowed the interest point to be detected by an interest point detection model.
    Type: Application
    Filed: April 27, 2022
    Publication date: November 3, 2022
    Inventors: Dung Anh Doan, Daniyar Turmukhambetov, Soohyun Bae
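The repeatability function described above, reduced to its simplest empirical form, is the fraction of frames in a time series in which a detector fired on a given interest point. The dictionary-based bookkeeping below is purely illustrative (the patent's model learns to predict this quantity rather than tabulate it).

```python
from collections import defaultdict

def repeatability(detections, num_frames):
    """Empirical repeatability of each interest point: the fraction of
    frames in which the detector found it.  `detections` maps a frame
    index to the set of point ids detected in that frame."""
    counts = defaultdict(int)
    for frame_pts in detections.values():
        for pid in frame_pts:
            counts[pid] += 1
    return {pid: c / num_frames for pid, c in counts.items()}

# Point "a" is detected in all 4 frames; point "b" in only half of them,
# so "a" is the more reliable landmark for localization.
dets = {0: {"a", "b"}, 1: {"a", "b"}, 2: {"a"}, 3: {"a"}}
rep = repeatability(dets, num_frames=4)
```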
  • Publication number: 20220210392
    Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
    Type: Application
    Filed: March 16, 2022
    Publication date: June 30, 2022
    Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
  • Patent number: 11317079
    Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: April 26, 2022
    Assignee: Niantic, Inc.
    Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
  • Publication number: 20220051372
    Abstract: An image localization system receives an image of a scene and generates a depth map for the image by inputting the image to a model trained for generating depth maps for images. The system determines surface normal vectors for the pixels in the depth map. The system clusters the surface normal vectors to identify regions in the image corresponding to planar surfaces. The system partitions the image into patches, each of which is a region of connected pixels in the image and corresponds to a cluster of surface normal vectors. The system rectifies the perspective distortion of patches and extracts perspective corrected features from the rectified patches. The system matches the perspective corrected features of the image with perspective corrected features of other images for three-dimensional re-localization.
    Type: Application
    Filed: August 6, 2021
    Publication date: February 17, 2022
    Inventors: Carl Sebastian Toft, Daniyar Turmukhambetov, Gabriel J. Brostow
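The first two steps above (depth map to per-pixel surface normals) can be sketched as follows: back-project the depth map to a point cloud, then cross the image-space tangent vectors. The simplified pinhole model with the principal point at the image origin is an assumption for illustration.

```python
import numpy as np

def normals_from_depth(depth, fx=1.0, fy=1.0):
    """Per-pixel surface normals from a depth map: back-project each
    pixel to 3-D (simplified pinhole, principal point at the origin),
    then take the cross product of the horizontal and vertical tangent
    vectors of the resulting point grid."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([u * depth / fx, v * depth / fy, depth], axis=-1)
    du = np.gradient(pts, axis=1)   # tangent along image rows
    dv = np.gradient(pts, axis=0)   # tangent along image columns
    n = np.cross(du, dv)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A fronto-parallel plane (constant depth) yields normals along +z.
n = normals_from_depth(np.full((5, 5), 2.0))
```

Clustering these normal vectors (e.g. on the unit sphere) is what lets the system group pixels into the planar regions it then rectifies.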
  • Publication number: 20220051048
    Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
    Type: Application
    Filed: August 10, 2021
    Publication date: February 17, 2022
    Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
  • Publication number: 20210352261
    Abstract: A computer system generates stereo image data from monocular images. The system generates depth maps for single images using a monocular depth estimation method. The system converts the depth maps to disparity maps and uses the disparity maps to generate additional images forming stereo pairs with the monocular images. The stereo pairs can be used to form a stereo image training data set for training various models, including depth estimation models or stereo matching models.
    Type: Application
    Filed: May 11, 2021
    Publication date: November 11, 2021
    Inventors: James Watson, Oisin Mac Aodha, Daniyar Turmukhambetov, Gabriel J. Brostow, Michael David Firman
  • Publication number: 20210218950
    Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
    Type: Application
    Filed: March 26, 2021
    Publication date: July 15, 2021
    Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
  • Patent number: 11044462
    Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: June 22, 2021
    Assignee: Niantic, Inc.
    Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
  • Publication number: 20200351489
    Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
    Type: Application
    Filed: May 1, 2020
    Publication date: November 5, 2020
    Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov