Patents by Inventor Mikhail Sizintsev
Mikhail Sizintsev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240303860
Abstract: A method, apparatus, and system for providing orientation and location estimates for a query ground image include determining spatial-aware features of a ground image and applying a model to the determined spatial-aware features to determine orientation and location estimates of the ground image.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Inventors: Niluthpol MITHUN, Kshitij MINHAS, Han-Pang CHIU, Taragay OSKIPER, Mikhail SIZINTSEV, Supun SAMARASEKERA, Rakesh KUMAR
-
Patent number: 12062186
Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video; determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video, in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video; temporally and geometrically aligning respective frames of the first video and the second video; and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
Type: Grant
Filed: October 7, 2021
Date of Patent: August 13, 2024
Assignee: SRI International
Inventors: Han-Pang Chiu, Junjiao Tian, Zachary Seymour, Niluthpol C. Mithun, Alex Krasner, Mikhail Sizintsev, Abhinav Rajvanshi, Kevin Kaighn, Philip Miller, Ryan Villamil, Supun Samarasekera
-
Publication number: 20220299592
Abstract: A method, apparatus and system for determining change in pose of a mobile device include determining, from first ranging information received at a first and a second receiver on the mobile device from a stationary node during a first time instance, a distance from the stationary node to the first receiver and the second receiver; determining, from second ranging information received at the first receiver and the second receiver from the stationary node during a second time instance, a distance from the stationary node to the first receiver and second receiver; and determining, from the determined distances during the first time instance and the second time instance, how far and in which direction the first receiver and the second receiver moved between the first time instance and the second time instance to determine a change in pose of the mobile device, where a position of the stationary node is unknown.
Type: Application
Filed: March 15, 2022
Publication date: September 22, 2022
Inventors: Han-Pang Chiu, Abhinav Rajvanshi, Alex Krasner, Mikhail Sizintsev, Glenn A. Murray, Supun Samarasekera
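The geometric core of this ranging-based pose estimate can be illustrated in a simplified 2D, translation-only form: triangulate the stationary node's position in the body frame (receiver A at the origin, receiver B on the x-axis) at each time instance, and read the body's displacement off the node's apparent motion, which is the negative of the body's motion. This is a minimal sketch with hypothetical function names; the patent's full system handles the sign ambiguity, rotation, and 3D geometry that are omitted here.

```python
import math

def node_in_body_frame(d_a, d_b, baseline):
    """Locate the node in the body frame via circle-circle intersection:
    receiver A at the origin, receiver B at (baseline, 0). The positive-y
    solution is chosen; in practice the sign ambiguity is resolved with
    additional receivers or motion continuity."""
    x = (d_a**2 - d_b**2 + baseline**2) / (2 * baseline)
    y = math.sqrt(max(d_a**2 - x**2, 0.0))
    return (x, y)

def pose_change(ranges_t1, ranges_t2, baseline):
    """Body translation between two time instances, assuming pure
    translation: the stationary node's apparent displacement in the
    body frame is the negative of the body's displacement."""
    p1 = node_in_body_frame(*ranges_t1, baseline)
    p2 = node_in_body_frame(*ranges_t2, baseline)
    return (p1[0] - p2[0], p1[1] - p2[1])

# Example: node at (3, 4); receivers A=(0,0), B=(1,0) translate by (0.5, 0.2)
node = (3.0, 4.0)
r1 = (math.dist((0.0, 0.0), node), math.dist((1.0, 0.0), node))
r2 = (math.dist((0.5, 0.2), node), math.dist((1.5, 0.2), node))
print(pose_change(r1, r2, baseline=1.0))  # ~ (0.5, 0.2)
```

Note the node's absolute position is never needed: only its ranges to the two receivers enter the computation, matching the abstract's "position of the stationary node is unknown" condition.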
-
Patent number: 11423586
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and for determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Grant
Filed: January 25, 2021
Date of Patent: August 23, 2022
Assignee: SRI International
Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
-
Patent number: 11313684
Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the captured images are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information.
Type: Grant
Filed: March 28, 2017
Date of Patent: April 26, 2022
Assignee: SRI International
Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Mikhail Sizintsev, Xun Zhou, Philip Miller, Glenn Murray
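The IMU fallback described above reduces, in its simplest form, to dead reckoning: integrating accelerations from the previous pose. Below is a minimal sketch assuming gravity-compensated accelerations already expressed in the world frame; the patented system additionally propagates orientation from gyroscope rates, models sensor biases, and fuses the visual relative-motion constraints. The function name is illustrative.

```python
import numpy as np

def propagate(pos, vel, accel_world, dt):
    """One step of constant-acceleration dead reckoning:
    p' = p + v*dt + 0.5*a*dt^2,  v' = v + a*dt.
    accel_world is assumed already rotated into the world frame and
    gravity-compensated (a full pipeline also propagates orientation
    and estimates accelerometer/gyroscope biases)."""
    new_pos = pos + vel * dt + 0.5 * accel_world * dt * dt
    new_vel = vel + accel_world * dt
    return new_pos, new_vel

# Example: from rest with 1 m/s^2 along x, integrate for 1 second
pos, vel = np.zeros(3), np.zeros(3)
a, dt = np.array([1.0, 0.0, 0.0]), 0.01
for _ in range(100):
    pos, vel = propagate(pos, vel, a, dt)
# pos[0] ~ 0.5 m, vel[0] ~ 1.0 m/s
```

Because integration errors accumulate, such propagation is only used between visual matches; the geo-referenced feature comparisons described in the abstract bound the drift.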
-
Publication number: 20220108455
Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video; determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video, in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video; temporally and geometrically aligning respective frames of the first video and the second video; and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
Type: Application
Filed: October 7, 2021
Publication date: April 7, 2022
Inventors: Han-Pang CHIU, Junjiao TIAN, Zachary SEYMOUR, Niluthpol C. MITHUN, Alex KRASNER, Mikhail SIZINTSEV, Abhinav RAJVANSHI, Kevin KAIGHN, Philip MILLER, Ryan VILLAMIL, Supun SAMARASEKERA
-
Patent number: 11270426
Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and to align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine level measurements of the structure; and a model recognition system configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
Type: Grant
Filed: May 14, 2019
Date of Patent: March 8, 2022
Assignee: SRI International
Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
-
Publication number: 20210142530
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and for determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Application
Filed: January 25, 2021
Publication date: May 13, 2021
Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
-
Publication number: 20200300637
Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the captured images are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information.
Type: Application
Filed: March 28, 2017
Publication date: September 24, 2020
Inventors: Han-Pang CHIU, Supun SAMARASEKERA, Rakesh KUMAR, Mikhail SIZINTSEV, Xun ZHOU, Philip MILLER, Glenn MURRAY
-
Publication number: 20190347783
Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and to align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine level measurements of the structure; and a model recognition system configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
Type: Application
Filed: May 14, 2019
Publication date: November 14, 2019
Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
-
Patent number: 9872968
Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
Type: Grant
Filed: April 11, 2014
Date of Patent: January 23, 2018
Assignee: SRI INTERNATIONAL
Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
-
Publication number: 20170024904
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and for determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Application
Filed: October 5, 2016
Publication date: January 26, 2017
Inventors: Supun SAMARASEKERA, Taragay OSKIPER, Rakesh KUMAR, Mikhail SIZINTSEV, Vlad BRANZOI
-
Patent number: 9495783
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and for determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Grant
Filed: June 13, 2013
Date of Patent: November 15, 2016
Assignee: SRI INTERNATIONAL
Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
-
Publication number: 20140316192
Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
Type: Application
Filed: April 16, 2014
Publication date: October 23, 2014
Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
-
Publication number: 20140316191
Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
Type: Application
Filed: April 11, 2014
Publication date: October 23, 2014
Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
-
Patent number: 8385630
Abstract: The present invention is a system and a method for processing stereo images utilizing a real time, robust, and accurate stereo matching system and method based on a coarse-to-fine architecture. At each image pyramid level, non-centered windows for matching and adaptive upsampling of coarse-level disparities are performed to generate estimated disparity maps using the ACTF approach. In order to minimize propagation of disparity errors from coarser to finer levels, the present invention performs an iterative optimization, at each level, that minimizes a cost function to generate smooth disparity maps with crisp occlusion boundaries.
Type: Grant
Filed: December 29, 2010
Date of Patent: February 26, 2013
Assignee: SRI International
Inventors: Mikhail Sizintsev, Sujit Kuthirummal, Rakesh Kumar, Supun Samarasekera, Harpreet Singh Sawhney
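The coarse-to-fine architecture can be sketched with plain SAD block matching on an image pyramid: match over the full (scaled) disparity range at the coarsest level, then at each finer level double the coarse disparities and search only near that guess. This is a simplified illustration only; it omits the patent's non-centered windows, adaptive upsampling, and per-level iterative optimization, and all names and parameters are illustrative.

```python
import numpy as np

def box_filter(a, win):
    """Sum of `a` over a (2*win+1)^2 window at every pixel, via an
    integral image (cumulative sums) with edge padding."""
    k = 2 * win + 1
    p = np.pad(a, win, mode="edge").cumsum(0).cumsum(1)
    p = np.pad(p, ((1, 0), (1, 0)))
    return p[k:, k:] - p[:-k, k:] - p[k:, :-k] + p[:-k, :-k]

def block_match(left, right, max_disp, win=2):
    """Brute-force SAD block matching over disparities 0..max_disp."""
    best = np.zeros(left.shape, dtype=np.int64)
    best_cost = np.full(left.shape, np.inf)
    for d in range(max_disp + 1):
        cost = box_filter(np.abs(left - np.roll(right, d, axis=1)), win)
        better = cost < best_cost
        best[better], best_cost[better] = d, cost[better]
    return best

def coarse_to_fine(left, right, max_disp, levels=3, win=2, radius=1):
    """Match at the coarsest pyramid level, then at each finer level
    double the upsampled coarse disparities and accept only candidates
    within `radius` of that guess. (A real implementation evaluates
    only the narrowed range; here all disparities are scanned and
    filtered, for simplicity.)"""
    pyr_l, pyr_r = [left], [right]
    for _ in range(levels - 1):
        pyr_l.append(pyr_l[-1][::2, ::2])
        pyr_r.append(pyr_r[-1][::2, ::2])
    disp = block_match(pyr_l[-1], pyr_r[-1], max(max_disp >> (levels - 1), 1), win)
    for lev in range(levels - 2, -1, -1):
        L, R = pyr_l[lev], pyr_r[lev]
        h, w = L.shape
        guess = np.kron(2 * disp, np.ones((2, 2), dtype=np.int64))[:h, :w]
        best, best_cost = guess.copy(), np.full((h, w), np.inf)
        for d in range((max_disp >> lev) + 1):
            cost = box_filter(np.abs(L - np.roll(R, d, axis=1)), win)
            cand = (np.abs(d - guess) <= radius) & (cost < best_cost)
            best[cand], best_cost[cand] = d, cost[cand]
        disp = best
    return disp
```

The per-level candidate restriction is what makes the coarse-to-fine scheme fast in practice, but, as the abstract notes, it also propagates coarse-level errors downward; the patented iterative optimization at each level is aimed at exactly that failure mode.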
-
Publication number: 20110176722
Abstract: The present invention is a system and a method for processing stereo images utilizing a real time, robust, and accurate stereo matching system and method based on a coarse-to-fine architecture. At each image pyramid level, non-centered windows for matching and adaptive upsampling of coarse-level disparities are performed to generate estimated disparity maps using the ACTF approach. In order to minimize propagation of disparity errors from coarser to finer levels, the present invention performs an iterative optimization, at each level, that minimizes a cost function to generate smooth disparity maps with crisp occlusion boundaries.
Type: Application
Filed: December 29, 2010
Publication date: July 21, 2011
Inventors: Mikhail Sizintsev, Sujit Kuthirummal, Rakesh Kumar, Supun Samarasekera, Harpreet Singh Sawhney