Patents by Inventor Mikhail Sizintsev

Mikhail Sizintsev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220299592
    Abstract: A method, apparatus and system for determining change in pose of a mobile device include determining from first ranging information received at a first and a second receiver on the mobile device from a stationary node during a first time instance, a distance from the stationary node to the first receiver and the second receiver, determining from second ranging information received at the first receiver and the second receiver from the stationary node during a second time instance, a distance from the stationary node to the first receiver and second receiver, and determining from the determined distances during the first time instance and the second time instance, how far and in which direction the first receiver and the second receiver moved between the first time instance and the second time instance to determine a change in pose of the mobile device, where a position of the stationary node is unknown.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 22, 2022
    Inventors: Han-Pang Chiu, Abhinav Rajvanshi, Alex Krasner, Mikhail Sizintsev, Glenn A. Murray, Supun Samarasekera
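The geometry behind this ranging approach can be sketched in 2D: with the stationary node placed at the origin, the two receiver ranges plus the known receiver baseline fix the rig's position up to a mirror ambiguity, and comparing the recovered positions across the two time instances indicates the motion. This is an illustrative sketch, not the patented method; the planar assumption and all function names are my own.

```python
import math

def rig_position_from_ranges(r1, r2, baseline):
    """Locate two receivers in a node-centred 2D frame (node at origin),
    given their ranges r1, r2 to the node and the known rig baseline.
    Returns (p1, p2) up to a mirror ambiguity about the x-axis."""
    p1 = (r1, 0.0)  # pin receiver 1 to the +x axis
    # intersect the circle |p| = r2 with the circle |p - p1| = baseline
    x = (r1**2 + r2**2 - baseline**2) / (2.0 * r1)
    y_sq = r2**2 - x**2
    if y_sq < 0:
        raise ValueError("ranges are inconsistent with the baseline")
    return p1, (x, math.sqrt(y_sq))  # + branch; the - branch is the mirror

def midpoint_range(r1, r2, baseline):
    """Distance from the node to the midpoint of the two receivers."""
    p1, p2 = rig_position_from_ranges(r1, r2, baseline)
    mx, my = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0
    return math.hypot(mx, my)
```

Comparing the solutions at the first and second time instances gives how far and in which direction the rig moved relative to the node; a real system must also resolve the mirror ambiguity (e.g., from motion continuity) and handle full 3D geometry.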
  • Patent number: 11423586
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: August 23, 2022
    Assignee: SRI International
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Patent number: 11313684
    Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the images captured are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: April 26, 2022
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Mikhail Sizintsev, Xun Zhou, Philip Miller, Glenn Murray
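The fallback logic in the abstract above can be illustrated with a toy pose updater: adopt an absolute fix when a geo-referenced feature matches, and otherwise dead-reckon from IMU and visual-odometry relative motion. The dataclass, the simple averaging fusion, and all names are hypothetical stand-ins for the filter-based fusion a real navigation system would use.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading: float

def update_pose(prev, imu_delta, vo_delta, geo_fix=None):
    """If an extracted feature matched a geo-referenced landmark, adopt
    the resulting absolute fix; otherwise propagate the previous pose by
    blending (here: simply averaging) the IMU-propagated motion and the
    visual-odometry relative motion between consecutive frames."""
    if geo_fix is not None:
        return geo_fix
    dx, dy, dh = (0.5 * (a + b) for a, b in zip(imu_delta, vo_delta))
    return Pose(prev.x + dx, prev.y + dy, prev.heading + dh)
```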
  • Publication number: 20220108455
    Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video, determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video, temporally and geometrically aligning respective frames of the first video and the second video, and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
    Type: Application
    Filed: October 7, 2021
    Publication date: April 7, 2022
    Inventors: Han-Pang Chiu, Junjiao Tian, Zachary Seymour, Niluthpol C. Mithun, Alex Krasner, Mikhail Sizintsev, Abhinav Rajvanshi, Kevin Kaighn, Philip Miller, Ryan Villamil, Supun Samarasekera
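The key-frame scheme this abstract describes, running the heavier segmentation only on key frames and predicting results for the frames in between, can be caricatured as a simple scheduler. Here "prediction" is reduced to reuse of the latest key-frame output, and `segment_fn` and `key_every` are illustrative names, not the patent's components.

```python
def process_stream(frames, segment_fn, key_every=5):
    """Run an expensive segmentation pass only on key frames; for the
    in-between frames, reuse (rather than truly predict from) the latest
    key-frame result."""
    results, last = [], None
    for i, frame in enumerate(frames):
        if i % key_every == 0:
            last = segment_fn(frame)  # full two-stream pass (placeholder)
        results.append(last)
    return results
```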
  • Patent number: 11270426
    Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to: use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine level measurements of the structure; and a model recognition system configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: March 8, 2022
    Assignee: SRI International
    Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
  • Publication number: 20210142530
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Application
    Filed: January 25, 2021
    Publication date: May 13, 2021
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Publication number: 20200300637
    Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the images captured are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information.
    Type: Application
    Filed: March 28, 2017
    Publication date: September 24, 2020
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Mikhail Sizintsev, Xun Zhou, Philip Miller, Glenn Murray
  • Publication number: 20190347783
    Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to: use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine level measurements of the structure; and a model recognition system configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 14, 2019
    Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
  • Patent number: 9872968
    Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: January 23, 2018
    Assignee: SRI International
    Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
  • Publication number: 20170024904
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Application
    Filed: October 5, 2016
    Publication date: January 26, 2017
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Patent number: 9495783
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: November 15, 2016
    Assignee: SRI International
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Publication number: 20140316192
    Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
    Type: Application
    Filed: April 16, 2014
    Publication date: October 23, 2014
    Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
  • Publication number: 20140316191
    Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
    Type: Application
    Filed: April 11, 2014
    Publication date: October 23, 2014
    Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
  • Patent number: 8385630
    Abstract: The present invention is a system and method for processing stereo images utilizing a real-time, robust, and accurate stereo matching approach based on a coarse-to-fine architecture. At each image pyramid level, non-centered windows for matching and adaptive upsampling of coarse-level disparities are used to generate estimated disparity maps using the ACTF approach. In order to minimize propagation of disparity errors from coarser to finer levels, the present invention performs an iterative optimization at each level that minimizes a cost function to generate smooth disparity maps with crisp occlusion boundaries.
    Type: Grant
    Filed: December 29, 2010
    Date of Patent: February 26, 2013
    Assignee: SRI International
    Inventors: Mikhail Sizintsev, Sujit Kuthirummal, Rakesh Kumar, Supun Samarasekera, Harpreet Singh Sawhney
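A brute-force version of the per-level matching step can be sketched in a few lines. This is plain SAD block matching with centered windows, so it deliberately omits the non-centered windows, adaptive upsampling, and iterative optimization that the patent describes; the function name and window handling are my own.

```python
import numpy as np

def block_match(left, right, max_disp, win=3):
    """SAD block matching on rectified grayscale images: for each pixel
    in the left image, find the disparity d minimizing the sum of
    absolute differences against the right-image window shifted by d."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y-pad:y+pad+1, x-pad:x+pad+1]
            costs = [np.abs(patch - right[y-pad:y+pad+1,
                                          x-d-pad:x-d+pad+1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

In a coarse-to-fine scheme, this search would run on a downsampled pair with a proportionally smaller `max_disp`, and the resulting map would be upsampled (with disparity values scaled) to seed a narrow refinement search at the next finer level.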
  • Publication number: 20110176722
    Abstract: The present invention is a system and method for processing stereo images utilizing a real-time, robust, and accurate stereo matching approach based on a coarse-to-fine architecture. At each image pyramid level, non-centered windows for matching and adaptive upsampling of coarse-level disparities are used to generate estimated disparity maps using the ACTF approach. In order to minimize propagation of disparity errors from coarser to finer levels, the present invention performs an iterative optimization at each level that minimizes a cost function to generate smooth disparity maps with crisp occlusion boundaries.
    Type: Application
    Filed: December 29, 2010
    Publication date: July 21, 2011
    Inventors: Mikhail Sizintsev, Sujit Kuthirummal, Rakesh Kumar, Supun Samarasekera, Harpreet Singh Sawhney