Patents by Inventor Daniyar Turmukhambetov
Daniyar Turmukhambetov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240046610
Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
Type: Application
Filed: October 13, 2023
Publication date: February 8, 2024
Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
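As a rough illustration of the overlap factors this abstract describes, the sketch below treats each box encoding as an axis-aligned box given by lower/upper corner vectors and normalises the intersection volume by each box's own volume. This is a minimal, hypothetical reading: in the patented system the corners come from a learned model, and the function names here are invented for illustration.

```python
import numpy as np

def box_volume(lo, hi):
    # Volume of an axis-aligned box given lower/upper corner vectors.
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def asymmetric_overlap(box1, box2):
    """Enclosure and concentration factors between two box encodings.

    Each box is a (lo, hi) pair of D-dimensional corner vectors.
    Illustrative definitions: the intersection volume normalised by
    each box's own volume, giving two asymmetric ratios in [0, 1].
    """
    lo1, hi1 = box1
    lo2, hi2 = box2
    inter_lo = np.maximum(lo1, lo2)          # intersection box corners
    inter_hi = np.minimum(hi1, hi2)
    inter = box_volume(inter_lo, inter_hi)
    enclosure = inter / box_volume(lo1, hi1)       # surface of image 1 seen in image 2
    concentration = inter / box_volume(lo2, hi2)   # surface of image 2 seen in image 1
    return enclosure, concentration
```

With a small box fully inside a larger one, the two factors disagree — e.g. a [0, 0.5]² box inside the unit square gives enclosure 0.25 but concentration 1.0 — which is the asymmetry the abstract is after.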
-
Publication number: 20230410349
Abstract: A method or a system for map-free visual relocalization of a device. The system obtains a reference image of an environment captured by a reference camera at a reference pose. The system also receives a query image taken by a camera of the device. The system determines a relative pose of the camera of the device relative to the reference camera based in part on the reference image and the query image. The system determines a pose of the query camera in the environment based on the reference pose and the relative pose.
Type: Application
Filed: June 20, 2023
Publication date: December 21, 2023
Inventors: Eduardo Henrique Arnold, Jamie Michael Wynn, Guillermo Garcia-Hernando, Sara Alexandra Gomes Vicente, Aron Monszpart, Victor Adrian Prisacariu, Daniyar Turmukhambetov, Eric Brachmann, Axel Barroso-Laguna
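The last step of this abstract — combining a known reference pose with an estimated relative pose to place the query camera in the environment — is ordinary rigid-transform composition. A minimal sketch with 4×4 homogeneous matrices follows; the camera-to-world convention and the function names are assumptions, not taken from the patent.

```python
import numpy as np

def translation(t):
    # 4x4 homogeneous transform that translates by vector t.
    T = np.eye(4)
    T[:3, 3] = t
    return T

def compose_pose(ref_pose, rel_pose):
    """Query camera pose in the world frame.

    ref_pose: 4x4 camera-to-world transform of the reference camera.
    rel_pose: 4x4 transform of the query camera expressed in the
              reference camera's frame (e.g. regressed from the
              reference and query images).
    """
    return ref_pose @ rel_pose
```

For example, a reference camera translated by (1, 0, 0) composed with a relative translation of (0, 2, 0) places the query camera at (1, 2, 0) in the world frame.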
-
Patent number: 11836965
Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
Type: Grant
Filed: August 10, 2021
Date of Patent: December 5, 2023
Assignee: NIANTIC, INC.
Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
-
Patent number: 11805236
Abstract: A computer system generates stereo image data from monocular images. The system generates depth maps for single images using a monocular depth estimation method. The system converts the depth maps to disparity maps and uses the disparity maps to generate additional images forming stereo pairs with the monocular images. The stereo pairs can be used to form a stereo image training data set for training various models, including depth estimation models or stereo matching models.
Type: Grant
Filed: May 11, 2021
Date of Patent: October 31, 2023
Assignee: NIANTIC, INC.
Inventors: James Watson, Oisin Mac Aodha, Daniyar Turmukhambetov, Gabriel J. Brostow, Michael David Firman
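The depth-to-disparity conversion and view synthesis this abstract mentions can be sketched as follows: disparity is focal length times baseline over depth, and a second view is formed by shifting pixels horizontally by their disparity. This is a naive forward warp with no occlusion or hole handling — a sketch of the idea, not the patented pipeline, and the function names are hypothetical.

```python
import numpy as np

def depth_to_disparity(depth, focal, baseline):
    # Standard stereo relation: disparity (pixels) = f * B / depth.
    return focal * baseline / depth

def synthesize_right_view(left, disparity):
    """Forward-warp a left image into a synthetic right view.

    left: (H, W) grayscale image; disparity: (H, W) in pixels.
    Each pixel is splatted to x - d in the right view; pixels that
    land outside the frame are dropped, and holes stay at zero.
    """
    H, W = left.shape
    right = np.zeros_like(left)
    for y in range(H):
        for x in range(W):
            xr = x - int(round(disparity[y, x]))  # right view shifts content left
            if 0 <= xr < W:
                right[y, xr] = left[y, x]
    return right
```

In practice such synthesized pairs would then be collected into a training set for a stereo matching or depth estimation model, as the abstract describes.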
-
Patent number: 11711508
Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
Type: Grant
Filed: March 16, 2022
Date of Patent: July 25, 2023
Assignee: Niantic, Inc.
Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
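The per-pixel rule in this abstract — add a supervised term only where the hint reprojects better than the prediction — can be written down directly. In the sketch below the photometric losses are taken as given arrays, and the log-L1 form of the supervised depth term is an assumption made for illustration; the abstract only says "a supervised depth loss between depth prediction and depth hint".

```python
import numpy as np

def depth_hints_loss(primary, hinted, depth_pred, depth_hint):
    """Per-pixel overall loss following the abstract's rule.

    primary, hinted: per-pixel photometric losses from reprojecting
    with the predicted depth and with the depth hint, respectively.
    Where the hint gives the smaller reprojection loss, the hint is
    treated as trustworthy and a supervised term pulls the
    prediction toward it; elsewhere only the primary loss remains.
    """
    supervised = np.abs(np.log(depth_pred) - np.log(depth_hint))
    use_hint = hinted < primary
    per_pixel = np.where(use_hint, primary + supervised, primary)
    return per_pixel.mean()
```

Minimising this over many image pairs is what trains the depth estimation model in the abstract's final step.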
-
Publication number: 20220383449
Abstract: A depth prediction model for predicting a depth map from an input image is disclosed. The depth prediction model leverages wavelet decomposition to minimize computations. The depth prediction model comprises a plurality of encoding layers, a coarse depth prediction layer, a plurality of decoding layers, and a plurality of inverse discrete wavelet transforms (IDWTs). The encoding layers are configured to input the image and to downsample the image into feature maps including a coarse feature map. The coarse depth prediction layer is configured to input the coarse feature map and to output a coarse depth map. The decoding layers are configured to input the feature maps and to predict wavelet coefficients based on the feature maps. The IDWTs are configured to upsample the coarse depth map, based on the predicted wavelet coefficients, to a final depth map at the same resolution as the input image.
Type: Application
Filed: May 20, 2022
Publication date: December 1, 2022
Inventors: Michaël Lalaina Ramamonjisoa, Michael David Firman, James Watson, Daniyar Turmukhambetov
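One IDWT step of the kind this abstract describes doubles the resolution of the coarse depth map by combining it with three predicted detail sub-bands. The sketch below uses the Haar wavelet with an orthonormal scale convention — the specific wavelet and normalisation are assumptions; the patent does not commit to Haar.

```python
import numpy as np

def inverse_haar_2d(ll, lh, hl, hh):
    """One inverse 2-D Haar step: four (H, W) sub-bands -> (2H, 2W).

    ll is the coarse (approximation) band -- here the coarse depth
    map; lh, hl, hh are the detail coefficients a decoder layer
    would predict. Inverts the forward transform
        ll = (a+b+c+d)/2, lh = (a-b+c-d)/2,
        hl = (a+b-c-d)/2, hh = (a-b-c+d)/2
    for each 2x2 block [[a, b], [c, d]].
    """
    H, W = ll.shape
    out = np.zeros((2 * H, 2 * W))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0  # a: top-left
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0  # b: top-right
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0  # c: bottom-left
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0  # d: bottom-right
    return out
```

Applying one such step per decoding layer takes the coarse depth map back up to the input resolution, which is the upsampling path the abstract outlines.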
-
Publication number: 20220351518
Abstract: The present disclosure describes approaches for evaluating interest points for localization use based on the repeatability of detecting the interest point in images capturing a scene that includes it. The repeatability of interest points is determined by using a trained repeatability model. The repeatability model is trained by analyzing a time series of images of a scene and determining repeatability functions for each interest point in the scene. The repeatability function is determined by identifying which images in the time series of images allowed for the detection of the interest point by an interest point detection model.
Type: Application
Filed: April 27, 2022
Publication date: November 3, 2022
Inventors: Dung Anh Doan, Daniyar Turmukhambetov, Soohyun Bae
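The empirical quantity behind this abstract — in which images of a time series a detector actually fired at a point — can be sketched without the learned model: score each point by its detection rate and rank. The names and the plain detection-rate definition are illustrative stand-ins for the trained repeatability function.

```python
def repeatability(detections):
    # Fraction of images in the time series where the point was detected.
    return sum(detections) / len(detections)

def rank_points(detection_log):
    """Rank interest points by empirical repeatability.

    detection_log: dict mapping point id -> list of booleans, one
    per image in the time series (True where the interest point
    detection model found the point in that image).
    Returns point ids, most repeatable first.
    """
    scores = {pid: repeatability(d) for pid, d in detection_log.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A localization system could then prefer the highest-ranked points, which is the selection criterion the abstract motivates.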
-
Publication number: 20220210392
Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
Type: Application
Filed: March 16, 2022
Publication date: June 30, 2022
Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
-
Patent number: 11317079
Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
Type: Grant
Filed: March 26, 2021
Date of Patent: April 26, 2022
Assignee: Niantic, Inc.
Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
-
Publication number: 20220051372
Abstract: An image localization system receives an image of a scene and generates a depth map for the image by inputting the image to a model trained for generating depth maps for images. The system determines surface normal vectors for the pixels in the depth map. The system clusters the surface normal vectors to identify regions in the image corresponding to planar surfaces. The system partitions the image into patches, each of which is a region of connected pixels in the image and corresponds to a cluster of surface normal vectors. The system rectifies the perspective distortion of patches and extracts perspective corrected features from the rectified patches. The system matches the perspective corrected features of the image with perspective corrected features of other images for three-dimensional re-localization.
Type: Application
Filed: August 6, 2021
Publication date: February 17, 2022
Inventors: Carl Sebastian Toft, Daniyar Turmukhambetov, Gabriel J. Brostow
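The first two steps of this abstract — surface normals from per-pixel 3D points, then grouping normals to find planar regions — might be sketched as below. The finite-difference normal scheme and the crude round-and-group "clustering" are illustrative assumptions, not the patented method.

```python
import numpy as np

def surface_normals(points):
    """Per-pixel surface normals from an (H, W, 3) back-projected
    point map (one 3D point per depth-map pixel).

    Cross product of the local x- and y-tangent vectors, then
    normalised -- a standard finite-difference scheme.
    """
    dx = np.gradient(points, axis=1)   # tangent along image columns
    dy = np.gradient(points, axis=0)   # tangent along image rows
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12
    return n

def cluster_normals(normals, decimals=1):
    """Coarse stand-in for clustering: group pixels whose rounded
    normal agrees. Returns (H, W) integer labels and the unique
    rounded normals, one per cluster."""
    rounded = np.round(normals, decimals).reshape(-1, 3)
    uniq, labels = np.unique(rounded, axis=0, return_inverse=True)
    return labels.reshape(normals.shape[:2]), uniq
```

A single planar surface should collapse to one cluster, which is the cue the system uses before rectifying each patch's perspective distortion.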
-
Publication number: 20220051048
Abstract: An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
Type: Application
Filed: August 10, 2021
Publication date: February 17, 2022
Inventors: Anita Rau, Guillermo Garcia-Hernando, Gabriel J. Brostow, Daniyar Turmukhambetov
-
Publication number: 20210352261
Abstract: A computer system generates stereo image data from monocular images. The system generates depth maps for single images using a monocular depth estimation method. The system converts the depth maps to disparity maps and uses the disparity maps to generate additional images forming stereo pairs with the monocular images. The stereo pairs can be used to form a stereo image training data set for training various models, including depth estimation models or stereo matching models.
Type: Application
Filed: May 11, 2021
Publication date: November 11, 2021
Inventors: James Watson, Oisin Mac Aodha, Daniyar Turmukhambetov, Gabriel J. Brostow, Michael David Firman
-
Publication number: 20210218950
Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
Type: Application
Filed: March 26, 2021
Publication date: July 15, 2021
Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
-
Patent number: 11044462
Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
Type: Grant
Filed: May 1, 2020
Date of Patent: June 22, 2021
Assignee: Niantic, Inc.
Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov
-
Publication number: 20200351489
Abstract: A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
Type: Application
Filed: May 1, 2020
Publication date: November 5, 2020
Inventors: James Watson, Michael David Firman, Gabriel J. Brostow, Daniyar Turmukhambetov