Patents by Inventor Taragay Oskiper
Taragay Oskiper has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240303860
Abstract: A method, apparatus, and system for providing orientation and location estimates for a query ground image include determining spatial-aware features of a ground image and applying a model to the determined spatial-aware features to determine orientation and location estimates of the ground image.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Inventors: Niluthpol Mithun, Kshitij Minhas, Han-Pang Chiu, Taragay Oskiper, Mikhail Sizintsev, Supun Samarasekera, Rakesh Kumar
-
Patent number: 11423586
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Grant
Filed: January 25, 2021
Date of Patent: August 23, 2022
Assignee: SRI International
Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
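The tracking pipeline above pairs a localization module (device pose) with an augmented reality module that overlays known object locations on a display. A minimal sketch of that overlay step is a pinhole projection of a world point into pixel coordinates; all names, the intrinsics, and the poses below are illustrative assumptions, not from the patent:

```python
import numpy as np

def project_to_display(obj_world, cam_pose, K):
    """Project a 3D world point into pixel coordinates for an AR overlay.

    obj_world : (3,) object position in the world frame
    cam_pose  : (4, 4) camera-to-world homogeneous transform (from the localizer)
    K         : (3, 3) pinhole intrinsics
    Returns (u, v) pixel position, or None if the point is behind the camera.
    """
    world_to_cam = np.linalg.inv(cam_pose)
    p_cam = world_to_cam @ np.append(obj_world, 1.0)
    if p_cam[2] <= 0:          # behind the camera: nothing to draw
        return None
    uv = K @ p_cam[:3]
    return uv[0] / uv[2], uv[1] / uv[2]

# Identity pose, object 2 m straight ahead -> projects to the principal point.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
print(project_to_display(np.array([0.0, 0.0, 2.0]), np.eye(4), K))  # (320.0, 240.0)
```

In a wide/narrow dual-field-of-view rig as described in the abstract, the same projection would be run with a separate `K` per sensor.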
-
Patent number: 11270426
Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis, and comparison of structures are presented herein. In some embodiments, a CAIS may include: a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected, using information obtained from a first sensor package; a model alignment system configured to use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and to align observations and/or information obtained from the first sensor package to that local area of the 3D computer model; a second sensor package configured to obtain fine-level measurements of the structure; and a model recognition system configured to compare the fine-level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
Type: Grant
Filed: May 14, 2019
Date of Patent: March 8, 2022
Assignee: SRI International
Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
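The core loop in this abstract is: use a SLAM pose to index into the 3D model, then compare fine measurements against the modeled values at that location. A toy sketch of that indexing-and-comparison step, with a grid-cell model, tolerances, and all names being illustrative assumptions rather than anything from the patent:

```python
# Hypothetical coarse 3D model: nominal wall thicknesses (mm) stored per
# 1 m grid cell. Real CAIS would index into a full CAD/BIM model instead.
model_grid = {(0, 0): 120.0, (0, 1): 120.0, (1, 0): 95.0}

def inspect(position_m, measured_mm, tol_mm=2.0):
    """Use a SLAM position estimate to index the model cell, then flag
    fine measurements that deviate from the modeled nominal value."""
    cell = (int(position_m[0]), int(position_m[1]))
    nominal = model_grid.get(cell)
    if nominal is None:
        return "no model data for this cell"
    dev = measured_mm - nominal
    return "ok" if abs(dev) <= tol_mm else f"deviation {dev:+.1f} mm"

print(inspect((0.4, 0.7), 121.0))   # within tolerance
print(inspect((1.2, 0.3), 90.0))    # flagged deviation
```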
-
Publication number: 20210142530
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Application
Filed: January 25, 2021
Publication date: May 13, 2021
Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
-
Publication number: 20190347783
Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis, and comparison of structures are presented herein. In some embodiments, a CAIS may include: a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected, using information obtained from a first sensor package; a model alignment system configured to use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and to align observations and/or information obtained from the first sensor package to that local area of the 3D computer model; a second sensor package configured to obtain fine-level measurements of the structure; and a model recognition system configured to compare the fine-level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
Type: Application
Filed: May 14, 2019
Publication date: November 14, 2019
Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
-
Patent number: 9892563
Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
Type: Grant
Filed: March 21, 2017
Date of Patent: February 13, 2018
Assignee: SRI International
Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Yonga Kim Knowles
-
Patent number: 9734414
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Grant
Filed: August 25, 2015
Date of Patent: August 15, 2017
Assignee: SRI International
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Publication number: 20170024904
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Application
Filed: October 5, 2016
Publication date: January 26, 2017
Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
-
Patent number: 9495783
Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
Type: Grant
Filed: June 13, 2013
Date of Patent: November 15, 2016
Assignee: SRI International
Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
-
Publication number: 20160078303
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Application
Filed: August 25, 2015
Publication date: March 17, 2016
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Patent number: 9121713
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Grant
Filed: April 19, 2012
Date of Patent: September 1, 2015
Assignee: SRI International
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
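A key property in the multi-camera visual odometry abstracts above (and in patent 8305430 further down) is that, with known rig extrinsics, the pose of every camera follows from the pose of any single camera. A minimal sketch of that rigid-rig composition, with hypothetical camera names and transforms for illustration:

```python
import numpy as np

def rig_poses(anchor_pose, extrinsics):
    """Given one camera's world pose (4x4 homogeneous transform) and each
    camera's fixed transform relative to that anchor camera, recover the
    world poses of all cameras in the rig.
    anchor_pose: anchor-camera-to-world; extrinsics[name]: camera-to-anchor."""
    return {name: anchor_pose @ T for name, T in extrinsics.items()}

def translation(T):
    return T[:3, 3]

# Illustrative rig: a side camera mounted 0.5 m to the right of the anchor.
T_side = np.eye(4); T_side[0, 3] = 0.5
anchor = np.eye(4); anchor[2, 3] = 10.0   # anchor camera 10 m along z
poses = rig_poses(anchor, {"side": T_side})
print(translation(poses["side"]))   # side camera at [0.5, 0, 10]
```

This composition is why pose estimates "generated for each camera by all of the cameras" can be fused in a single rig frame.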
-
Patent number: 9031809
Abstract: A method and apparatus for providing three-dimensional navigation for a node, comprising: an inertial measurement unit for providing gyroscope, acceleration, and velocity information (collectively, IMU information); a ranging unit for providing distance information relative to at least one reference node; at least one visual sensor for providing images of an environment surrounding the node; a preprocessor, coupled to the inertial measurement unit, the ranging unit, and the visual sensors, for generating error states for the IMU information, the distance information, and the images; and an error-state predictive filter, coupled to the preprocessor, for processing the error states to produce a three-dimensional pose of the node.
Type: Grant
Filed: July 14, 2011
Date of Patent: May 12, 2015
Assignee: SRI International
Inventors: Rakesh Kumar, Supun Samarasekera, Han-Pang Chiu, Zhiwei Zhu, Taragay Oskiper, Lu Wang, Raia Hadsell
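The error-state filtering idea in this abstract (and in patent 8761439 below) is to dead-reckon a nominal state from the IMU and have the filter estimate only the error in that nominal state, correcting it with ranging or visual measurements. A toy 1-D sketch of that structure; the noise values and class name are illustrative assumptions, not the patent's filter:

```python
class ErrorStateFilter1D:
    """Toy 1-D error-state filter: the nominal position is propagated by
    dead reckoning (IMU-style), while the filter tracks the ERROR in that
    nominal state and folds corrections back in after each update."""
    def __init__(self, q=0.01, r=0.04):
        self.nominal = 0.0   # dead-reckoned position
        self.err = 0.0       # estimated error in the nominal position
        self.P = 1.0         # error covariance
        self.q, self.r = q, r

    def predict(self, velocity, dt):
        self.nominal += velocity * dt      # propagate nominal state
        self.P += self.q                   # error uncertainty grows

    def update_range(self, measured_pos):
        innov = measured_pos - (self.nominal + self.err)
        K = self.P / (self.P + self.r)     # Kalman gain
        self.err += K * innov
        self.P *= (1.0 - K)
        # fold the error estimate back into the nominal state and reset it
        self.nominal += self.err
        self.err = 0.0

f = ErrorStateFilter1D()
f.predict(velocity=1.0, dt=1.0)     # dead reckoning says x is about 1.0
f.update_range(measured_pos=1.2)    # ranging measurement disagrees slightly
print(round(f.nominal, 3))          # corrected estimate between 1.0 and 1.2
```

Keeping the error (rather than the full state) in the filter keeps the state small and near zero, which is the usual motivation for error-state formulations in inertial navigation.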
-
Patent number: 8761439
Abstract: An apparatus for providing three-dimensional pose, comprising: monocular visual sensors for providing images of an environment surrounding the apparatus; an inertial measurement unit (IMU) for providing gyroscope, acceleration, and velocity information (collectively, IMU information); a feature tracking module for generating feature tracking information for the images; and an error-state filter, coupled to the feature tracking module, the IMU, and the one or more visual sensors, for correcting the IMU information and producing a pose estimation based on at least one error-state model chosen according to the sensed images, the IMU information, and the feature tracking information.
Type: Grant
Filed: August 24, 2011
Date of Patent: June 24, 2014
Assignee: SRI International
Inventors: Rakesh Kumar, Supun Samarasekera, Taragay Oskiper
-
Patent number: 8305430
Abstract: A visual odometry system and method for a fixed or known calibration of an arbitrary number of cameras in monocular configuration is provided. Images collected from each of the cameras in this distributed aperture system have negligible or absolutely no overlap. The relative pose and configuration of the cameras with respect to each other are assumed to be known and provide a means for determining the three-dimensional poses of all the cameras from any given single camera pose. The cameras may be arranged in different configurations for different applications and are suitable for mounting on a vehicle or person undergoing general motion. A complete parallel architecture is provided in conjunction with the implementation of the visual odometry method, so that real-time processing can be achieved on a multi-CPU system.
Type: Grant
Filed: September 18, 2006
Date of Patent: November 6, 2012
Assignee: SRI International
Inventors: Taragay Oskiper, John Fields, Rakesh Kumar
-
Publication number: 20120206596
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Application
Filed: April 19, 2012
Publication date: August 16, 2012
Applicant: SRI International
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Patent number: 8174568
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Grant
Filed: December 3, 2007
Date of Patent: May 8, 2012
Assignee: SRI International
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Patent number: 7925049
Abstract: A method for estimating pose from a sequence of images, which includes the steps of: detecting at least three feature points in both the left image and the right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and the right image of a second pair of stereo images at a second, different point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and their corresponding two-dimensional reference feature points in the stereo images.
Type: Grant
Filed: August 3, 2007
Date of Patent: April 12, 2011
Assignee: SRI International
Inventors: Zhiwei Zhu, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Harpreet Singh Sawhney, Rakesh Kumar
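The first half of this stereo pipeline — turning matched 2-D feature correspondences into 3-D reference points — reduces, for a rectified stereo pair, to depth from disparity. A minimal sketch of that triangulation step; the intrinsics, baseline, and pixel values are illustrative assumptions:

```python
import numpy as np

def triangulate_rectified(uL, uR, v, f, cx, cy, baseline):
    """Recover a 3-D point from a matched feature in a rectified stereo pair.

    uL, uR   : horizontal pixel coordinates in the left and right images
    v        : shared vertical pixel coordinate (rectified rows align)
    f, cx, cy: focal length (px) and principal point
    baseline : camera separation in metres
    """
    disparity = uL - uR                 # larger disparity -> closer point
    Z = f * baseline / disparity        # depth along the optical axis
    X = (uL - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Feature at (400, 240) left / (380, 240) right; f = 800 px, 0.1 m baseline:
p = triangulate_rectified(400, 380, 240, f=800, cx=320, cy=240, baseline=0.1)
print(p)   # point 4 m ahead, 0.4 m to the right
```

The second half of the method (pose from at least three such 3-D points and their tracked 2-D positions) is the classical perspective-n-point problem; in practice a solver such as OpenCV's `solvePnP` is used for that step.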
-
Publication number: 20080167814
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Application
Filed: December 3, 2007
Publication date: July 10, 2008
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Publication number: 20080144925
Abstract: A method for estimating pose from a sequence of images, which includes the steps of: detecting at least three feature points in both the left image and the right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and the right image of a second pair of stereo images at a second, different point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and their corresponding two-dimensional reference feature points in the stereo images.
Type: Application
Filed: August 3, 2007
Publication date: June 19, 2008
Inventors: Zhiwei Zhu, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Harpreet Singh Sawhney, Rakesh Kumar
-
Publication number: 20070115352
Abstract: A visual odometry system and method for a fixed or known calibration of an arbitrary number of cameras in monocular configuration is provided. Images collected from each of the cameras in this distributed aperture system have negligible or absolutely no overlap. The relative pose and configuration of the cameras with respect to each other are assumed to be known and provide a means for determining the three-dimensional poses of all the cameras from any given single camera pose. The cameras may be arranged in different configurations for different applications and are suitable for mounting on a vehicle or person undergoing general motion. A complete parallel architecture is provided in conjunction with the implementation of the visual odometry method, so that real-time processing can be achieved on a multi-CPU system.
Type: Application
Filed: September 18, 2006
Publication date: May 24, 2007
Inventors: Taragay Oskiper, John Fields, Rakesh Kumar