Patents by Inventor Oleg Naroditsky

Oleg Naroditsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9600067
    Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into a user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
    Type: Grant
    Filed: October 27, 2009
    Date of Patent: March 21, 2017
    Assignee: SRI International
    Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Kim
  • Publication number: 20160350926
    Abstract: A method includes: receiving sensor measurements from a pre-processing module, in which the sensor measurements include image data and inertial data for a device; transferring, using a processor, information derived from the sensor measurements, from a first set of variables associated with a first window of time to a second set of variables associated with a second window of time, in which the first and second windows consecutively overlap in time; and outputting, to a post-processing module, a state of the device based on the transferred information.
    Type: Application
    Filed: August 12, 2016
    Publication date: December 1, 2016
    Inventors: Alex Flint, Oleg Naroditsky, Christopher P. Broaddus, Andriy Grygorenko, Stergios Roumeliotis, Oriel Bergig
  • Patent number: 9424647
    Abstract: A method includes: receiving sensor measurements from a pre-processing module, in which the sensor measurements include image data and inertial data for a device; transferring, using a processor, information derived from the sensor measurements, from a first set of variables associated with a first window of time to a second set of variables associated with a second window of time, in which the first and second windows consecutively overlap in time; and outputting, to a post-processing module, a state of the device based on the transferred information.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: August 23, 2016
    Assignees: Apple Inc., Regents of the University of Minnesota
    Inventors: Alex Flint, Oleg Naroditsky, Christopher P. Broaddus, Andriy Grygorenko, Stergios Roumeliotis, Oriel Bergig
  • Publication number: 20160078303
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 17, 2016
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 9121713
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: September 1, 2015
    Assignee: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Publication number: 20150043784
    Abstract: A method includes: receiving sensor measurements from a pre-processing module, in which the sensor measurements include image data and inertial data for a device; transferring, using a processor, information derived from the sensor measurements, from a first set of variables associated with a first window of time to a second set of variables associated with a second window of time, in which the first and second windows consecutively overlap in time; and outputting, to a post-processing module, a state of the device based on the transferred information.
    Type: Application
    Filed: August 12, 2014
    Publication date: February 12, 2015
    Inventors: Alex Flint, Oleg Naroditsky, Christopher P. Broaddus, Andriy Grygorenko, Stergios Roumeliotis, Oriel Bergig
  • Patent number: 8854446
    Abstract: A method of capturing image data for iris code based identification of vertebrates, including humans, comprises the steps of: recording a digital image of an eye with a camera equipped with at least two light sources that have a fixed spatial relationship to an object lens of the camera; locating the eye in the digital image by detecting a specularity pattern that is created by reflection of light from said at least two light sources at a cornea of the eye; and calculating information on the position of the camera relative to the eye on the basis of said fixed spatial relationship between the light sources and the object lens and on the basis of said specularity pattern.
    Type: Grant
    Filed: April 28, 2011
    Date of Patent: October 7, 2014
    Assignees: Iristrac, LLC, SRI International
    Inventors: James Russell Bergen, Oleg Naroditsky
  • Patent number: 8755607
    Abstract: A method of normalizing a digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: determining a pupil region in the image as a convex region having a boundary that can only be described by more than five independent parameters; determining, in the image, an outer boundary of the iris; and transforming an image of a ring shaped iris region that surrounds the pupil region into a coordinate system in which each point of the iris region is described by a first coordinate that indicates the position of the point along the boundary of the pupil and a second coordinate that indicates the distance of the point from said boundary, said second coordinate having a constant value when the point is located on the outer boundary of the iris.
    Type: Grant
    Filed: April 28, 2011
    Date of Patent: June 17, 2014
    Assignees: SRI International, Iristrac, LLC
    Inventors: James Russell Bergen, Oleg Naroditsky
  • Patent number: 8639058
    Abstract: The present invention pertains to a method of generating a normalized digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: capturing one or more digital images of the eye with a camera; constructing a plurality of imaginary outer iris boundaries in the one or more digital images, based on a known dimension of the outer iris boundary of the eye of a given species of vertebrates; and using said imaginary boundaries for transforming the one or more digital images into a plurality of normalized iris images that are insensitive to variations in a dimension of a pupil of the eye.
    Type: Grant
    Filed: April 28, 2011
    Date of Patent: January 28, 2014
    Assignee: SRI International
    Inventors: James Russell Bergen, Oleg Naroditsky
  • Publication number: 20120274756
    Abstract: A method of capturing image data for iris code based identification of vertebrates, including humans, comprises the steps of: recording a digital image of an eye with a camera equipped with at least two light sources that have a fixed spatial relationship to an object lens of the camera; locating the eye in the digital image by detecting a specularity pattern that is created by reflection of light from said at least two light sources at a cornea of the eye; and calculating information on the position of the camera relative to the eye on the basis of said fixed spatial relationship between the light sources and the object lens and on the basis of said specularity pattern.
    Type: Application
    Filed: April 28, 2011
    Publication date: November 1, 2012
    Inventors: James Russell Bergen, Oleg Naroditsky
  • Publication number: 20120275707
    Abstract: A method of normalizing a digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: determining a pupil region in the image as a convex region having a boundary that can only be described by more than five independent parameters; determining, in the image, an outer boundary of the iris; and transforming an image of a ring shaped iris region that surrounds the pupil region into a coordinate system in which each point of the iris region is described by a first coordinate that indicates the position of the point along the boundary of the pupil and a second coordinate that indicates the distance of the point from said boundary, said second coordinate having a constant value when the point is located on the outer boundary of the iris.
    Type: Application
    Filed: April 28, 2011
    Publication date: November 1, 2012
    Inventors: James Russell Bergen, Oleg Naroditsky
  • Publication number: 20120275665
    Abstract: The present invention pertains to a method of generating a normalized digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: capturing one or more digital images of the eye with a camera; constructing a plurality of imaginary outer iris boundaries in the one or more digital images, based on a known dimension of the outer iris boundary of the eye of a given species of vertebrates; and using said imaginary boundaries for transforming the one or more digital images into a plurality of normalized iris images that are insensitive to variations in a dimension of a pupil of the eye.
    Type: Application
    Filed: April 28, 2011
    Publication date: November 1, 2012
    Inventors: James Russell Bergen, Oleg Naroditsky
  • Publication number: 20120206596
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: April 19, 2012
    Publication date: August 16, 2012
    Applicant: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 8174568
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Grant
    Filed: December 3, 2007
    Date of Patent: May 8, 2012
    Assignee: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 7925049
    Abstract: A method for estimating pose from a sequence of images, which includes the steps of detecting at least three feature points in both the left image and right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and right image of a second pair of stereo images at a second point in time different from the first point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and its corresponding two-dimensional reference feature points in the stereo images.
    Type: Grant
    Filed: August 3, 2007
    Date of Patent: April 12, 2011
    Assignee: SRI International
    Inventors: Zhiwei Zhu, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Harpreet Singh Sawhney, Rakesh Kumar
  • Publication number: 20100103196
    Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into a user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
    Type: Application
    Filed: October 27, 2009
    Publication date: April 29, 2010
    Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Kim
  • Patent number: 7657127
    Abstract: A method and apparatus for strobed image capture includes stroboscopic illumination synchronized with one or more cameras to improve a signal to noise ratio, reduce motion blur and avoid object damage in sensor systems used to analyze illumination sensitive objects.
    Type: Grant
    Filed: April 24, 2009
    Date of Patent: February 2, 2010
    Assignee: Sarnoff Corporation
    Inventors: Dominick LoIacono, James R. Matey, Oleg Naroditsky, Michael Tinker, Thomas Zappia
  • Publication number: 20090232418
    Abstract: A method and apparatus for strobed image capture includes stroboscopic illumination synchronized with one or more cameras to improve a signal to noise ratio, reduce motion blur and avoid object damage in sensor systems used to analyze illumination sensitive objects.
    Type: Application
    Filed: April 24, 2009
    Publication date: September 17, 2009
    Inventors: Dominick LoIacono, James R. Matey, Oleg Naroditsky, Michael Tinker, Thomas Zappia
  • Patent number: 7542628
    Abstract: A method and apparatus for strobed image capture includes stroboscopic illumination synchronized with one or more cameras to improve a signal to noise ratio, reduce motion blur and avoid object damage in sensor systems used to analyze illumination sensitive objects.
    Type: Grant
    Filed: January 19, 2006
    Date of Patent: June 2, 2009
    Assignee: Sarnoff Corporation
    Inventors: Dominick LoIacono, James R. Matey, Oleg Naroditsky, Michael Tinker, Thomas Zappia
  • Publication number: 20080167814
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: December 3, 2007
    Publication date: July 10, 2008
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
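The mixed-reality abstract (patent 9600067) turns on rendering synthetic objects through the user's tracked pose so they appear fixed in the real world. A minimal sketch of that projection step, assuming a simple pinhole camera model; the intrinsics matrix `K`, the function `render_point`, and all numeric values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Illustrative camera intrinsics for the user-worn display (assumed values).
K = np.array([[500.0, 0.0, 320.0],   # fx, 0, cx
              [0.0, 500.0, 240.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])

def render_point(p_world, R, t):
    """Project a world-frame 3D point into pixel coordinates for pose (R, t)."""
    p_cam = R @ p_world + t          # world frame -> user-worn camera frame
    uvw = K @ p_cam                  # pinhole projection
    return uvw[:2] / uvw[2]          # perspective divide -> pixel coordinates

# A synthetic object 2 m straight ahead of an identity pose renders at the
# principal point of the display.
pixel = render_point(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3))
```

As the tracked pose (R, t) updates from the user-worn sensors, re-running the projection keeps the synthetic object anchored to its world position.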
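The sliding-window abstracts (patent 9424647 and publications 20160350926 and 20150043784) describe transferring information from one window of time into a consecutively overlapping one. A toy sketch of just the overlap mechanism; the class name, the window size, and the naive averaging standing in for the actual estimator are all placeholders of mine:

```python
from collections import deque

class SlidingWindowEstimator:
    """Toy illustration: overlapping windows of sensor-derived states."""

    def __init__(self, window_size=3):
        # deque with maxlen drops the oldest state automatically; the states
        # that remain are the overlap carried from the previous window.
        self.window = deque(maxlen=window_size)

    def update(self, measurement):
        self.window.append(measurement)
        # Output a device "state" fused over the current window (toy fusion).
        return sum(self.window) / len(self.window)

est = SlidingWindowEstimator(window_size=3)
states = [est.update(m) for m in [1.0, 2.0, 3.0, 4.0]]
# Window contents over time: [1], [1,2], [1,2,3], then [2,3,4]
```

The point of the overlap is that information from earlier measurements is not discarded when the window advances; a real visual-inertial estimator would carry covariance terms rather than a running mean.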
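The multi-camera visual odometry abstracts (patents 8174568 and 9121713) describe pose estimates generated by all cameras in the rig. A hedged sketch of why that helps: each camera's estimate can be mapped through its known extrinsic offset into a common rig frame and fused. The extrinsics, the translation-only model, and the naive mean fusion are illustrative assumptions only:

```python
import numpy as np

# Assumed camera offsets in the rig frame (extrinsics known by calibration).
extrinsics = {"front": np.array([0.0, 0.0, 0.1]),
              "rear":  np.array([0.0, 0.0, -0.1])}

def rig_pose_from_camera(cam_position, cam_name):
    """Map a camera-position estimate back to the rig origin."""
    return cam_position - extrinsics[cam_name]

# Each camera independently estimated its own position after the rig moved
# 1 m along x; mapping through the extrinsics makes the estimates agree.
estimates = [rig_pose_from_camera(np.array([1.0, 0.0, 0.1]), "front"),
             rig_pose_from_camera(np.array([1.0, 0.0, -0.1]), "rear")]
fused = np.mean(estimates, axis=0)
```

A real system would fuse full 6-DoF poses with uncertainty weights, and would also integrate the IMU and GPS measurements the abstract mentions.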
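The iris-capture abstract (patent 8854446, publication 20120274756) locates the eye by the specularity pattern that two fixed light sources reflect off the cornea. A toy sketch of that detection step on a synthetic image; the image, the threshold, and the variable names are assumptions of mine, not the patent's method:

```python
import numpy as np

# Synthetic 8x8 frame with two bright corneal glints (assumed positions).
img = np.zeros((8, 8))
img[3, 2] = 1.0   # specularity from light source A
img[3, 5] = 1.0   # specularity from light source B

ys, xs = np.nonzero(img > 0.5)          # detect the specularity pattern
eye_center = (ys.mean(), xs.mean())     # eye located between the two glints
glint_separation = xs.max() - xs.min()  # shrinks as the camera moves away,
                                        # carrying camera-to-eye position cues
```

Because the light sources have a fixed spatial relationship to the lens, the geometry of the detected glint pattern is what lets the system reason about the camera's position relative to the eye.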
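The iris-normalization abstract (patent 8755607, publication 20120275707) assigns each iris point a first coordinate along the pupil boundary and a second coordinate measuring distance from that boundary, constant on the outer iris boundary. A minimal sketch using a circular-pupil simplification (the patent explicitly covers non-circular pupils; the function and values are illustrative):

```python
import math

def normalize_point(x, y, pupil_radius, iris_radius):
    """Map an iris-region point to (angle along pupil boundary, scaled depth)."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)   # first coordinate: position along pupil boundary
    # Second coordinate: distance from the pupil boundary, scaled so it is
    # exactly 1.0 for points on the outer iris boundary.
    radial = (r - pupil_radius) / (iris_radius - pupil_radius)
    return theta, radial

# A point sitting exactly on the outer iris boundary:
coord = normalize_point(6.0, 0.0, 2.0, 6.0)
```

Sampling this coordinate system on a regular grid produces the "unwrapped" rectangular iris image from which an iris code is computed.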
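The imaginary-boundary abstract (patent 8639058, publication 20120275665) exploits the fact that the outer iris diameter is roughly fixed for a given species, so an outer boundary of known physical size can be constructed around the pupil, making normalization insensitive to pupil dilation. A sketch of the scale computation; the 12 mm figure and function name are illustrative assumptions:

```python
# Assumed species prior: outer iris diameter in millimetres.
KNOWN_IRIS_DIAMETER_MM = 12.0

def imaginary_outer_radius(pixels_per_mm):
    """Radius in pixels of the constructed (imaginary) outer iris boundary."""
    return 0.5 * KNOWN_IRIS_DIAMETER_MM * pixels_per_mm

# The same eye imaged at two different scales yields boundaries that
# normalize to the same physical extent:
r_near = imaginary_outer_radius(10.0)   # close-up image
r_far = imaginary_outer_radius(5.0)     # farther image
```

Because the boundary is tied to a known physical dimension rather than to the (dilation-dependent) pupil, normalized images from the two captures are directly comparable.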
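The stereo pose-estimation abstract (patent 7925049) begins by converting matched 2D feature correspondences into 3D reference points. A sketch of that triangulation step via the standard disparity relation Z = f·B/d; the focal length, baseline, and feature coordinates are illustrative, and the subsequent 3D-to-2D pose solve is omitted:

```python
F_PX, BASELINE_M = 500.0, 0.1   # assumed focal length (px) and baseline (m)

def triangulate(x_left, x_right, y):
    """3D point from a matched left/right feature pair (rectified stereo)."""
    d = x_left - x_right             # disparity of the correspondence
    z = F_PX * BASELINE_M / d        # depth from disparity: Z = f * B / d
    return (x_left * z / F_PX, y * z / F_PX, z)

# Three 2D feature correspondences -> three 3D reference points, the minimum
# the claim requires before computing pose from tracked features:
pts = [triangulate(50.0, 25.0, 0.0),
       triangulate(100.0, 50.0, 10.0),
       triangulate(-20.0, -40.0, 5.0)]
```

With these 3D reference points and their tracked 2D positions at a later time, the pose is then recovered by a 3D-to-2D solver (a PnP-style computation).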
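The strobed-capture abstract (patents 7542628 and 7657127, publication 20090232418) synchronizes a short stroboscopic pulse with the camera exposure so the subject is lit only while the shutter is open, reducing motion blur and light dose. A timing sketch; all durations and names are assumptions, not values from the patents:

```python
EXPOSURE_US = 1000.0   # assumed: shutter open for 1 ms
STROBE_US = 100.0      # assumed: 0.1 ms pulse -> ~10x less blur and exposure

def strobe_window(exposure_start_us):
    """Center the strobe pulse inside the exposure interval (microseconds)."""
    start = exposure_start_us + (EXPOSURE_US - STROBE_US) / 2.0
    return start, start + STROBE_US

window = strobe_window(0.0)
```

Keeping the pulse strictly inside the exposure is what lets effective exposure time be set by the strobe rather than the shutter, which is the blur-reduction mechanism the abstract describes.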