Patents by Inventor Oleg Naroditsky
Oleg Naroditsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9600067
Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
Type: Grant
Filed: October 27, 2009
Date of Patent: March 21, 2017
Assignee: SRI International
Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Kim
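The rendering step this abstract describes — placing a synthetic object in the user's field of view from a sensed pose — reduces, at its core, to projecting the object through the user's estimated camera pose. A minimal sketch, assuming a pinhole camera model with illustrative intrinsics (the patent does not specify any of these values):

```python
import numpy as np

def project_synthetic_point(p_world, R, t, fx, fy, cx, cy):
    """Project a 3D synthetic-object point (world frame) into the
    user's display, given the user's pose (R, t: world -> camera)."""
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:          # behind the user: not visible
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return (float(u), float(v))

# User at the world origin, looking down the world +Z axis.
R = np.eye(3)
t = np.zeros(3)
uv = project_synthetic_point(np.array([0.0, 0.0, 2.0]), R, t,
                             fx=500, fy=500, cx=320, cy=240)
print(uv)  # a point straight ahead lands at the image center: (320.0, 240.0)
```

In a real system the pose (R, t) would come from the user-worn sensors rather than being fixed, and the projection would run per frame for every synthetic object.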
-
Publication number: 20160350926
Abstract: A method includes: receiving sensor measurements from a pre-processing module, in which the sensor measurements include image data and inertial data for a device; transferring, using a processor, information derived from the sensor measurements, from a first set of variables associated with a first window of time to a second set of variables associated with a second window of time, in which the first and second windows consecutively overlap in time; and outputting, to a post-processing module, a state of the device based on the transferred information.
Type: Application
Filed: August 12, 2016
Publication date: December 1, 2016
Inventors: Alex Flint, Oleg Naroditsky, Christopher P. Broaddus, Andriy Grygorenko, Stergios Roumeliotis, Oriel Bergig
-
Patent number: 9424647
Abstract: A method includes: receiving sensor measurements from a pre-processing module, in which the sensor measurements include image data and inertial data for a device; transferring, using a processor, information derived from the sensor measurements, from a first set of variables associated with a first window of time to a second set of variables associated with a second window of time, in which the first and second windows consecutively overlap in time; and outputting, to a post-processing module, a state of the device based on the transferred information.
Type: Grant
Filed: August 12, 2014
Date of Patent: August 23, 2016
Assignees: Apple Inc., Regents of the University of Minnesota
Inventors: Alex Flint, Oleg Naroditsky, Christopher P. Broaddus, Andriy Grygorenko, Stergios Roumeliotis, Oriel Bergig
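The consecutively-overlapping-window transfer in this claim can be illustrated with a toy estimator. The window sizes, the deque representation, and the mean-as-state stand-in are all illustrative assumptions, not the actual visual-inertial filter:

```python
from collections import deque

class SlidingWindowEstimator:
    """Toy sliding-window state estimator: each window keeps the last
    `size` measurements, and consecutive windows overlap by `overlap`
    entries, so information carries over from one window to the next."""
    def __init__(self, size=4, overlap=2):
        self.size, self.overlap = size, overlap
        self.window = deque(maxlen=size)

    def push(self, measurement):
        self.window.append(measurement)

    def slide(self):
        """Start a new window, keeping the `overlap` newest entries."""
        carried = list(self.window)[-self.overlap:]
        self.window = deque(carried, maxlen=self.size)
        return carried

    def state(self):
        # Stand-in for a real filter update: the mean of the window.
        return sum(self.window) / len(self.window)

est = SlidingWindowEstimator(size=4, overlap=2)
for z in [1.0, 2.0, 3.0, 4.0]:
    est.push(z)
print(est.state())          # 2.5
carried = est.slide()       # [3.0, 4.0] transfers to the next window
print(carried)
```

The point of the overlap is continuity: the new window starts with information derived from the old one instead of from scratch.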
-
Publication number: 20160078303
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene, using video information captured by a plurality of cameras, is provided. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Application
Filed: August 25, 2015
Publication date: March 17, 2016
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Patent number: 9121713
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene, using video information captured by a plurality of cameras, is provided. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Grant
Filed: April 19, 2012
Date of Patent: September 1, 2015
Assignee: SRI International
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
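One piece of the integration this abstract describes — combining a video-based position estimate with a GPS fix — can be sketched as inverse-variance weighting. The variances, the per-axis independence, and the fusion rule itself are illustrative assumptions; the patent does not prescribe this particular combination:

```python
import numpy as np

def fuse_position(p_vo, var_vo, p_gps, var_gps):
    """Inverse-variance weighted fusion of a visual-odometry position
    estimate with a GPS fix (per axis, assuming independent noise)."""
    w_vo, w_gps = 1.0 / var_vo, 1.0 / var_gps
    return (w_vo * p_vo + w_gps * p_gps) / (w_vo + w_gps)

p_vo  = np.array([10.0, 5.0, 0.0])   # smooth but drift-prone
p_gps = np.array([12.0, 5.0, 0.0])   # noisy but drift-free
fused = fuse_position(p_vo, var_vo=1.0, p_gps=p_gps, var_gps=3.0)
print(fused)  # x is pulled 1/4 of the way toward the GPS fix: 10.5
```

Weighting by inverse variance means the more trusted sensor dominates; a full system would do this recursively (e.g. in a Kalman filter) rather than as a one-shot average.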
-
Publication number: 20150043784
Abstract: A method includes: receiving sensor measurements from a pre-processing module, in which the sensor measurements include image data and inertial data for a device; transferring, using a processor, information derived from the sensor measurements, from a first set of variables associated with a first window of time to a second set of variables associated with a second window of time, in which the first and second windows consecutively overlap in time; and outputting, to a post-processing module, a state of the device based on the transferred information.
Type: Application
Filed: August 12, 2014
Publication date: February 12, 2015
Inventors: Alex Flint, Oleg Naroditsky, Christopher P. Broaddus, Andriy Grygorenko, Stergios Roumeliotis, Oriel Bergig
-
Patent number: 8854446
Abstract: A method of capturing image data for iris code based identification of vertebrates, including humans, comprises the steps of: recording a digital image of an eye with a camera equipped with at least two light sources that have a fixed spatial relationship to an object lens of the camera; locating the eye in the digital image by detecting a specularity pattern that is created by reflection of light from said at least two light sources at a cornea of the eye; and calculating information on the position of the camera relative to the eye on the basis of said fixed spatial relationship between the light sources and the object lens and on the basis of said specularity pattern.
Type: Grant
Filed: April 28, 2011
Date of Patent: October 7, 2014
Assignees: Iristrac, LLC, SRI International
Inventors: James Russell Bergen, Oleg Naroditsky
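The position calculation this abstract describes can be approximated to first order: if the two specular highlights are treated as images of points separated by the source baseline, similar triangles relate their pixel separation to the camera-eye distance. A sketch with assumed numbers (a real corneal reflection also depends on the corneal radius of curvature, which this ignores):

```python
def eye_distance_from_specularities(f_px, baseline_m, sep_px):
    """First-order range estimate from a two-source specularity pattern.
    Treating the two highlights as images of points separated by the
    source baseline B, similar triangles give  Z ~ f * B / s."""
    return f_px * baseline_m / sep_px

# Assumed: 1000 px focal length, 5 cm source baseline,
# highlights observed 125 px apart in the image.
Z = eye_distance_from_specularities(f_px=1000, baseline_m=0.05, sep_px=125)
print(Z)  # 0.4 metres
```

The same specularity pattern also localizes the eye in the frame: two bright spots at the expected separation are a strong cue that a cornea, not some other glossy surface, produced them.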
-
Patent number: 8755607
Abstract: A method of normalizing a digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: determining a pupil region in the image as a convex region having a boundary that can only be described by more than five independent parameters; determining, in the image, an outer boundary of the iris; and transforming an image of a ring shaped iris region that surrounds the pupil region into a coordinate system in which each point of the iris region is described by a first coordinate that indicates the position of the point along the boundary of the pupil and a second coordinate that indicates the distance of the point from said boundary, said second coordinate having a constant value when the point is located on the outer boundary of the iris.
Type: Grant
Filed: April 28, 2011
Date of Patent: June 17, 2014
Assignees: SRI International, Iristrac, LLC
Inventors: James Russell Bergen, Oleg Naroditsky
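The coordinate transform in this claim — a first coordinate running along the (possibly non-circular) pupil boundary, and a second coordinate that is constant on the outer iris boundary — can be sketched as follows. The radial interpolation direction is an illustrative choice, not taken from the patent:

```python
import numpy as np

def unwrap_point(pupil_boundary, iris_center, iris_radius, i, r):
    """Map normalized coordinates (i, r) back to image coordinates:
    i indexes a point on the (possibly non-circular) pupil boundary,
    r in [0, 1] is the fraction of the way to the outer iris boundary,
    so r = 1 always lands on the outer boundary, as the claim requires."""
    p = pupil_boundary[i % len(pupil_boundary)]
    # Head from the pupil-boundary point toward the outer boundary,
    # radially from the iris center (an illustrative choice).
    d = p - iris_center
    outer = iris_center + iris_radius * d / np.linalg.norm(d)
    return (1 - r) * p + r * outer

# An elliptical (non-circular) pupil inside a circular iris of radius 100.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
pupil = np.stack([40 * np.cos(theta), 25 * np.sin(theta)], axis=1)
pt = unwrap_point(pupil, iris_center=np.zeros(2), iris_radius=100, i=0, r=1.0)
print(pt)  # r = 1 lands on the outer iris boundary, at (100, 0) for i = 0
```

Sampling the image over a grid of (i, r) values produces the rectangular normalized iris strip from which an iris code is computed, regardless of how non-circular the pupil is.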
-
Patent number: 8639058
Abstract: The present invention pertains to a method of generating a normalized digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: capturing one or more digital images of the eye with a camera; constructing a plurality of imaginary outer iris boundaries in the one or more digital images, based on a known dimension of the outer iris boundary of the eye of a given species of vertebrates; and using said imaginary boundaries for transforming the one or more digital images into a plurality of normalized iris images that are insensitive to variations in a dimension of a pupil of the eye.
Type: Grant
Filed: April 28, 2011
Date of Patent: January 28, 2014
Assignee: SRI International
Inventors: James Russell Bergen, Oleg Naroditsky
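Constructing an imaginary outer boundary from a species' known iris dimension is, at its simplest, a scale conversion from physical size to pixels. The physical diameter and image scale used here are assumed values for illustration only:

```python
def imaginary_outer_radius(known_iris_diameter_mm, px_per_mm):
    """Pixel radius of an imaginary outer iris boundary, built from the
    species' known physical iris diameter and the image scale. Because
    the physical diameter is fixed for the species, this boundary does
    not move when the pupil dilates, which is what makes the resulting
    normalization insensitive to pupil size."""
    return 0.5 * known_iris_diameter_mm * px_per_mm

# Assumed: a 28 mm iris imaged at 10 px/mm.
r_px = imaginary_outer_radius(28.0, 10.0)
print(r_px)  # 140.0
```

The image scale (px/mm) would itself come from camera calibration and an estimate of the camera-eye distance, such as the specularity-based range estimate of the related patents in this list.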
-
Publication number: 20120274756
Abstract: A method of capturing image data for iris code based identification of vertebrates, including humans, comprises the steps of: recording a digital image of an eye with a camera equipped with at least two light sources that have a fixed spatial relationship to an object lens of the camera; locating the eye in the digital image by detecting a specularity pattern that is created by reflection of light from said at least two light sources at a cornea of the eye; and calculating information on the position of the camera relative to the eye on the basis of said fixed spatial relationship between the light sources and the object lens and on the basis of said specularity pattern.
Type: Application
Filed: April 28, 2011
Publication date: November 1, 2012
Inventors: James Russell Bergen, Oleg Naroditsky
-
Publication number: 20120275707
Abstract: A method of normalizing a digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: determining a pupil region in the image as a convex region having a boundary that can only be described by more than five independent parameters; determining, in the image, an outer boundary of the iris; and transforming an image of a ring shaped iris region that surrounds the pupil region into a coordinate system in which each point of the iris region is described by a first coordinate that indicates the position of the point along the boundary of the pupil and a second coordinate that indicates the distance of the point from said boundary, said second coordinate having a constant value when the point is located on the outer boundary of the iris.
Type: Application
Filed: April 28, 2011
Publication date: November 1, 2012
Inventors: James Russell Bergen, Oleg Naroditsky
-
Publication number: 20120275665
Abstract: The present invention pertains to a method of generating a normalized digital image of an iris of an eye for the purpose of creating an iris code for identification of vertebrates, including humans, the method comprising the steps of: capturing one or more digital images of the eye with a camera; constructing a plurality of imaginary outer iris boundaries in the one or more digital images, based on a known dimension of the outer iris boundary of the eye of a given species of vertebrates; and using said imaginary boundaries for transforming the one or more digital images into a plurality of normalized iris images that are insensitive to variations in a dimension of a pupil of the eye.
Type: Application
Filed: April 28, 2011
Publication date: November 1, 2012
Inventors: James Russell Bergen, Oleg Naroditsky
-
Publication number: 20120206596
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene, using video information captured by a plurality of cameras, is provided. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Application
Filed: April 19, 2012
Publication date: August 16, 2012
Applicant: SRI International
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Patent number: 8174568
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene, using video information captured by a plurality of cameras, is provided. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Grant
Filed: December 3, 2007
Date of Patent: May 8, 2012
Assignee: SRI International
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
-
Patent number: 7925049
Abstract: A method for estimating pose from a sequence of images, which includes the steps of: detecting at least three feature points in both the left image and right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and right image of a second pair of stereo images at a second point in time different from the first point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and its corresponding two-dimensional reference feature points in the stereo images.
Type: Grant
Filed: August 3, 2007
Date of Patent: April 12, 2011
Assignee: SRI International
Inventors: Zhiwei Zhu, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Harpreet Singh Sawhney, Rakesh Kumar
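The first step of this claimed method — turning matched 2D stereo features into 3D reference points — can be sketched for a rectified stereo rig via the disparity relation Z = f·B/d. The focal length, principal point, and baseline below are illustrative; the pose itself would then come from a 3D-to-2D solve (a PnP problem), omitted here:

```python
import numpy as np

def triangulate(uv_left, uv_right, f, cx, cy, baseline):
    """Recover the 3D coordinates of a feature matched between rectified
    left/right images. With rectification the match differs only in its
    horizontal coordinate, and depth follows from Z = f * B / d."""
    d = uv_left[0] - uv_right[0]          # disparity, in pixels
    Z = f * baseline / d
    X = (uv_left[0] - cx) * Z / f
    Y = (uv_left[1] - cy) * Z / f
    return np.array([X, Y, Z])

# Assumed rig: f = 500 px, principal point (320, 240), 10 cm baseline.
p = triangulate((420.0, 240.0), (380.0, 240.0), f=500.0,
                cx=320.0, cy=240.0, baseline=0.1)
print(p)  # X = 0.25, Y = 0.0, Z = 1.25 (metres, left-camera frame)
```

With at least three such 3D points and their tracked 2D positions in the later frame, the pose calculation in the final step is a standard perspective-n-point problem.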
-
Publication number: 20100103196
Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
Type: Application
Filed: October 27, 2009
Publication date: April 29, 2010
Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Kim
-
Patent number: 7657127
Abstract: A method and apparatus for strobed image capture includes stroboscopic illumination synchronized with one or more cameras to improve a signal to noise ratio, reduce motion blur and avoid object damage in sensor systems used to analyze illumination sensitive objects.
Type: Grant
Filed: April 24, 2009
Date of Patent: February 2, 2010
Assignee: Sarnoff Corporation
Inventors: Dominick LoIacono, James R. Matey, Oleg Naroditsky, Michael Tinker, Thomas Zappia
-
Publication number: 20090232418
Abstract: A method and apparatus for strobed image capture includes stroboscopic illumination synchronized with one or more cameras to improve a signal to noise ratio, reduce motion blur and avoid object damage in sensor systems used to analyze illumination sensitive objects.
Type: Application
Filed: April 24, 2009
Publication date: September 17, 2009
Inventors: Dominick LoIacono, James R. Matey, Oleg Naroditsky, Michael Tinker, Thomas Zappia
-
Patent number: 7542628
Abstract: A method and apparatus for strobed image capture includes stroboscopic illumination synchronized with one or more cameras to improve a signal to noise ratio, reduce motion blur and avoid object damage in sensor systems used to analyze illumination sensitive objects.
Type: Grant
Filed: January 19, 2006
Date of Patent: June 2, 2009
Assignee: Sarnoff Corporation
Inventors: Dominick LoIacono, James R. Matey, Oleg Naroditsky, Michael Tinker, Thomas Zappia
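The motion-blur benefit of strobing can be quantified: blur is bounded by how far the object moves while the light pulse is on, so a blur budget fixes the maximum pulse width. A back-of-the-envelope sketch with assumed optics (none of these numbers come from the patent):

```python
def max_strobe_pulse_s(blur_budget_px, px_size_m, magnification, speed_m_s):
    """Longest strobe pulse that keeps motion blur within budget.
    During a pulse of length t the object moves speed * t metres,
    which spans speed * t * magnification / px_size pixels on the
    sensor; solving for t at the blur budget gives the limit."""
    return blur_budget_px * px_size_m / (magnification * speed_m_s)

# Assumed: 1 px blur budget, 5 um pixels, 0.1x magnification, 1 m/s motion.
t = max_strobe_pulse_s(1.0, 5e-6, 0.1, 1.0)
print(t)  # roughly 5e-05 s, i.e. a 50 microsecond pulse
```

A pulse that short freezes the motion, while the synchronized camera keeps its shutter open only around the pulse, so ambient light contributes little and the signal-to-noise ratio improves without raising the average irradiance on the (illumination-sensitive) object.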
-
Publication number: 20080167814
Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene, using video information captured by a plurality of cameras, is provided. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
Type: Application
Filed: December 3, 2007
Publication date: July 10, 2008
Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney