Patents by Inventor Taragay Oskiper

Taragay Oskiper has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11423586
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: August 23, 2022
    Assignee: SRI International
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
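The abstract above centers on projecting a localized object into the camera view so it can be highlighted on the display. A minimal sketch of that projection step, assuming a simple yaw-only pinhole camera model (the function and parameter names are illustrative, not taken from the patent):

```python
import math

def project_to_image(obj_world, cam_pos, cam_yaw, f, cx, cy):
    """Project a world point into a forward-facing camera whose pose is
    reduced to a position plus a yaw angle (an illustrative simplification
    of the localization + augmented-reality overlay pipeline)."""
    # World -> camera frame: translate to the camera, rotate by -yaw about up.
    dx = obj_world[0] - cam_pos[0]
    dy = obj_world[1] - cam_pos[1]
    dz = obj_world[2] - cam_pos[2]
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    xc = c * dx + s * dz        # right
    zc = -s * dx + c * dz       # forward (optical axis)
    yc = dy                     # up
    if zc <= 0:
        return None             # behind the camera; nothing to overlay
    # Pinhole projection; a wide and a narrow field of view would simply
    # use two different focal lengths f for the same pose.
    u = cx + f * xc / zc
    v = cy - f * yc / zc
    return (u, v)
```

Calling this with two focal lengths (one for the wide sensor, one for the narrow) gives the overlay coordinates in each view from the same device pose.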
  • Patent number: 11270426
    Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis, and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and to align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine-level measurements of the structure; and a model recognition system configured to compare the fine-level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: March 8, 2022
    Assignee: SRI International
    Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
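The model alignment step described above uses the global localization fix to index into only the local region of the 3D model. One simple way to sketch such an index, assuming the model is partitioned into uniform tiles (the tiling scheme is an assumption; the patent does not specify one):

```python
def model_tile_index(position, origin, tile_size):
    """Map a globally-localized position to the tile of a 3D computer model
    it falls in, so only that local region needs to be loaded and compared
    against sensor measurements. Purely illustrative indexing scheme."""
    # Integer tile coordinates along each axis, relative to the model origin.
    return tuple(int((p - o) // tile_size) for p, o in zip(position, origin))
```

A fine-measurement comparison would then be restricted to the model geometry stored under the returned tile key.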
  • Publication number: 20210142530
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Application
    Filed: January 25, 2021
    Publication date: May 13, 2021
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Publication number: 20190347783
    Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis, and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and to align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine-level measurements of the structure; and a model recognition system configured to compare the fine-level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
    Type: Application
    Filed: May 14, 2019
    Publication date: November 14, 2019
    Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
  • Patent number: 9892563
    Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: February 13, 2018
    Assignee: SRI International
    Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Yonga Kim Knowles
  • Patent number: 9734414
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Grant
    Filed: August 25, 2015
    Date of Patent: August 15, 2017
    Assignee: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
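One component this abstract describes is comparing an identified landmark against a database of previously identified landmarks. A minimal nearest-neighbour sketch with an ambiguity (ratio) test, under the assumption that landmarks are represented by fixed-length descriptor vectors (the representation and the ratio test are illustrative choices, not claims from the patent):

```python
import math

def match_landmark(query, database, ratio=0.8):
    """Match a landmark descriptor against previously seen landmarks by
    nearest-neighbour distance, rejecting ambiguous matches where the best
    and second-best candidates are too close (Lowe-style ratio test)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Score every database entry and sort best-first.
    scored = sorted((dist(query, d), name) for name, d in database.items())
    if not scored:
        return None
    if len(scored) >= 2 and scored[0][0] > ratio * scored[1][0]:
        return None  # ambiguous: best match not clearly better than runner-up
    return scored[0][1]
```

A recognized landmark would then anchor the video-based pose estimate to a known map location before fusion with IMU/GPS measurements.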
  • Publication number: 20170024904
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Application
    Filed: October 5, 2016
    Publication date: January 26, 2017
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Patent number: 9495783
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of a scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: November 15, 2016
    Assignee: SRI International
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Publication number: 20160078303
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 17, 2016
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 9121713
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: September 1, 2015
    Assignee: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 9031809
    Abstract: A method and apparatus for providing three-dimensional navigation for a node comprising an inertial measurement unit for providing gyroscope, acceleration and velocity information (collectively, IMU information); a ranging unit for providing distance information relative to at least one reference node; at least one visual sensor for providing images of an environment surrounding the node; a preprocessor, coupled to the inertial measurement unit, the ranging unit and the at least one visual sensor, for generating error states for the IMU information, the distance information and the images; and an error-state predictive filter, coupled to the preprocessor, for processing the error states to produce a three-dimensional pose of the node.
    Type: Grant
    Filed: July 14, 2011
    Date of Patent: May 12, 2015
    Assignee: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Han-Pang Chiu, Zhiwei Zhu, Taragay Oskiper, Lu Wang, Raia Hadsell
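The error-state predictive filter in this abstract estimates the error of the dead-reckoned IMU state rather than the state itself. A deliberately reduced one-dimensional sketch of that idea, fusing integrated velocity with range fixes (the scalar noise parameters and class interface are assumptions for illustration only):

```python
class ErrorStateFilter1D:
    """Minimal scalar error-state filter: the nominal position is propagated
    by integrating velocity (a stand-in for IMU dead reckoning) while the
    filter estimates the accumulated position error from range measurements."""

    def __init__(self, q=0.01, r=0.25):
        self.nominal = 0.0   # dead-reckoned position
        self.err = 0.0       # estimated error in the nominal state
        self.P = 1.0         # error covariance
        self.q, self.r = q, r  # process and measurement noise (assumed)

    def predict(self, velocity, dt):
        self.nominal += velocity * dt   # propagate nominal state
        self.P += self.q * dt           # uncertainty grows during prediction

    def update(self, range_measurement):
        # Innovation: measurement vs. error-corrected nominal position.
        innov = range_measurement - (self.nominal + self.err)
        K = self.P / (self.P + self.r)  # Kalman gain
        self.err += K * innov
        self.P *= (1.0 - K)

    def estimate(self):
        return self.nominal + self.err
```

The full system would carry a multi-dimensional error state (position, velocity, orientation biases) and fold in visual feature constraints as additional updates.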
  • Patent number: 8761439
    Abstract: An apparatus for providing three-dimensional pose comprising monocular visual sensors for providing images of an environment surrounding the apparatus, an inertial measurement unit (IMU) for providing gyroscope, acceleration and velocity information (collectively, IMU information), a feature tracking module for generating feature tracking information for the images, and an error-state filter, coupled to the feature tracking module, the IMU and the one or more visual sensors, for correcting IMU information and producing a pose estimation based on at least one error-state model chosen according to the sensed images, the IMU information and the feature tracking information.
    Type: Grant
    Filed: August 24, 2011
    Date of Patent: June 24, 2014
    Assignee: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Taragay Oskiper
  • Patent number: 8305430
    Abstract: A visual odometry system and method for a fixed or known calibration of an arbitrary number of cameras in monocular configuration is provided. Images collected from each of the cameras in this distributed aperture system have negligible or no overlap. The relative poses and configuration of the cameras with respect to each other are assumed to be known and provide a means for determining the three-dimensional poses of all the cameras from any given single camera pose. The cameras may be arranged in different configurations for different applications and are made suitable for mounting on a vehicle or person undergoing general motion. A complete parallel architecture is provided in conjunction with the implementation of the visual odometry method, so that real-time processing can be achieved on a multi-CPU system.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: November 6, 2012
    Assignee: SRI International
    Inventors: Taragay Oskiper, John Fields, Rakesh Kumar
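The key constraint this abstract relies on is that known rig calibration lets any single camera pose determine the poses of all cameras. With 4x4 homogeneous transforms that chaining can be sketched as follows (the function and variable names are illustrative, not from the patent):

```python
def matmul4(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rig_poses(T_world_cam0, T_cam0_cam):
    """Given the world pose of one camera (T_world_cam0) and the fixed rig
    calibration T_cam0_cam[i] (the pose of camera i expressed in camera 0's
    frame), return the world pose of every camera in the rig."""
    return [matmul4(T_world_cam0, T) for T in T_cam0_cam]
```

Because the calibration is fixed, estimating the motion of any one camera (even with no image overlap between cameras) constrains the motion of the whole distributed-aperture rig.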
  • Publication number: 20120206596
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: April 19, 2012
    Publication date: August 16, 2012
    Applicant: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 8174568
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Grant
    Filed: December 3, 2007
    Date of Patent: May 8, 2012
    Assignee: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 7925049
    Abstract: A method for estimating pose from a sequence of images, which includes the steps of detecting at least three feature points in both the left image and right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and right image of a second pair of stereo images at a second point in time different from the first point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and their corresponding two-dimensional reference feature points in the stereo images.
    Type: Grant
    Filed: August 3, 2007
    Date of Patent: April 12, 2011
    Assignee: SRI International
    Inventors: Zhiwei Zhu, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Harpreet Singh Sawhney, Rakesh Kumar
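The first steps of the claimed method triangulate 3D reference points from stereo correspondences. For a rectified stereo pair this reduces to the standard disparity relation Z = f·B/d, sketched below (function name and calling convention are illustrative; the pose itself would then come from a 3D-to-2D solve, which is omitted):

```python
def triangulate_rectified(uL, uR, v, f, cx, cy, baseline):
    """Recover the 3D point for a correspondence (uL, v) <-> (uR, v) in a
    rectified stereo pair with focal length f (pixels), principal point
    (cx, cy), and camera baseline (meters)."""
    d = uL - uR                     # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * baseline / d            # depth along the optical axis
    X = (uL - cx) * Z / f           # lateral offset
    Y = (v - cy) * Z / f            # vertical offset
    return (X, Y, Z)
```

With at least three such 3D points and their tracked 2D locations in a later frame, the camera pose follows from a standard perspective-n-point computation.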
  • Publication number: 20080167814
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: December 3, 2007
    Publication date: July 10, 2008
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Publication number: 20080144925
    Abstract: A method for estimating pose from a sequence of images, which includes the steps of detecting at least three feature points in both the left image and right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and right image of a second pair of stereo images at a second point in time different from the first point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and their corresponding two-dimensional reference feature points in the stereo images.
    Type: Application
    Filed: August 3, 2007
    Publication date: June 19, 2008
    Inventors: Zhiwei Zhu, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Harpreet Singh Sawhney, Rakesh Kumar
  • Publication number: 20070115352
    Abstract: A visual odometry system and method for a fixed or known calibration of an arbitrary number of cameras in monocular configuration is provided. Images collected from each of the cameras in this distributed aperture system have negligible or no overlap. The relative poses and configuration of the cameras with respect to each other are assumed to be known and provide a means for determining the three-dimensional poses of all the cameras from any given single camera pose. The cameras may be arranged in different configurations for different applications and are made suitable for mounting on a vehicle or person undergoing general motion. A complete parallel architecture is provided in conjunction with the implementation of the visual odometry method, so that real-time processing can be achieved on a multi-CPU system.
    Type: Application
    Filed: September 18, 2006
    Publication date: May 24, 2007
    Inventors: Taragay Oskiper, John Fields, Rakesh Kumar
  • Publication number: 20070070069
    Abstract: The present invention provides a system and method for the real-time rapid capture, annotation, and creation of an annotated hyper-video map of environments. The method includes processing video, audio and GPS data to create the hyper-video map, which is further enhanced with textual, audio and hyperlink annotations that enable the user to see, hear, and operate in an environment with cognitive awareness. This annotated hyper-video map thus provides a seamlessly navigable, indexable, high-fidelity immersive visualization of the environment with situational awareness.
    Type: Application
    Filed: September 26, 2006
    Publication date: March 29, 2007
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Harpreet Sawhney, Manoj Aggarwal
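At its core, the hyper-video map couples GPS positions to annotated video content so that the nearest captured view of a queried location can be retrieved. A toy sketch of that lookup, assuming a flat list of GPS-tagged annotations (the data layout and function name are assumptions for illustration):

```python
def nearest_annotation(track, lat, lon):
    """Return the annotation of the hyper-video frame captured closest to a
    query location. `track` is a list of (lat, lon, annotation) tuples, a
    hypothetical stand-in for the GPS-tagged video index; squared planar
    distance is used for simplicity rather than true geodesic distance."""
    if not track:
        raise ValueError("empty track")
    def d2(entry):
        return (entry[0] - lat) ** 2 + (entry[1] - lon) ** 2
    return min(track, key=d2)[2]
```

A production index would use a spatial data structure (e.g. a grid or k-d tree) and proper geodesic distance, but the retrieval idea is the same.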