Patents by Inventor Sara Susca
Sara Susca has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9547910
Abstract: A navigation system comprises an image sensor operable to obtain a first image at a first moment in time and a second image at a second moment in time; an inertial measurement unit (IMU) operable to obtain measurement data corresponding to the first and second moments in time; and a processing unit coupled to the image sensor and the IMU. The processing unit is operable to estimate motion between the first and second moments in time based on the measurement data from the IMU; calculate a plurality of transformations based on the estimated motion; apply each of the plurality of transformations to the first image to produce a plurality of predicted images; compare the second image to each of the plurality of predicted images; select the predicted image from the plurality of predicted images which most closely matches the second image; and compensate for error in the IMU measurement data based on the transformation corresponding to the selected predicted image.
Type: Grant
Filed: March 4, 2010
Date of Patent: January 17, 2017
Assignee: Honeywell International Inc.
Inventors: Rida Hamza, Sara Susca
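The predict-compare-select loop described in this abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the patented system: the candidate "transformations" are pure pixel shifts, the match score is a simple sum of squared differences, and the function name and candidate grid are hypothetical.

```python
import numpy as np

def select_best_transformation(first_image, second_image, candidate_shifts):
    """Apply each candidate shift to the first image and return the shift
    whose predicted image best matches the second image (lowest SSD)."""
    best_shift, best_err = None, float("inf")
    for dx, dy in candidate_shifts:
        predicted = np.roll(np.roll(first_image, dy, axis=0), dx, axis=1)
        err = np.sum((predicted.astype(float) - second_image.astype(float)) ** 2)
        if err < best_err:
            best_shift, best_err = (dx, dy), err
    return best_shift

# toy example: the second image is the first shifted by (dx=2, dy=1)
img1 = np.zeros((16, 16))
img1[4:8, 4:8] = 1.0
img2 = np.roll(np.roll(img1, 1, axis=0), 2, axis=1)
candidates = [(dx, dy) for dx in range(-3, 4) for dy in range(-3, 4)]
best = select_best_transformation(img1, img2, candidates)
```

In the system described above, the candidate set would be seeded from the IMU's motion estimate, and the winning transformation would feed back as a correction to the IMU error states.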
-
Patent number: 8948446
Abstract: A method comprises receiving a first frame from at least one imaging device, receiving a second frame from the at least one imaging device, analyzing at least a portion of the first frame and at least a portion of the second frame, and indicating when at least one of a zero velocity update and a zero attitude update should be performed based at least in part on the analysis of the at least a portion of the first frame and the at least a portion of the second frame. The first frame is captured at a first vantage point and the second frame is captured at a second vantage point.
Type: Grant
Filed: January 19, 2011
Date of Patent: February 3, 2015
Assignee: Honeywell International Inc.
Inventors: Sara Susca, Viswanath Talasila, Shrikant Rao
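A minimal sketch of the kind of stationarity check that could trigger such zero-velocity / zero-attitude updates, assuming a simple frame-difference test (the abstract does not specify the actual analysis; the function name and threshold are illustrative):

```python
import numpy as np

def should_apply_zero_updates(frame_a, frame_b, threshold=0.01):
    """Indicate a zero-velocity / zero-attitude update when two frames are
    nearly identical, suggesting the platform has not moved between the
    two vantage points."""
    diff = np.mean(np.abs(frame_a.astype(float) - frame_b.astype(float)))
    return diff < threshold

# a frame with a single bright feature, and the same frame shifted sideways
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 1.0
stationary = should_apply_zero_updates(frame, frame.copy())
moving = should_apply_zero_updates(frame, np.roll(frame, 2, axis=1))
```

Identical frames fall below the threshold and flag a zero update; the shifted frame does not.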
-
Patent number: 8723987
Abstract: In one embodiment, a method comprises generating three-dimensional (3D) imaging data for an environment using an imaging sensor, extracting a plane from the 3D imaging data, and estimating an uncertainty of an attribute associated with the extracted plane. The method further comprises generating a navigation solution using the attribute associated with the extracted plane and the estimate of the uncertainty of the attribute associated with the extracted plane.
Type: Grant
Filed: October 30, 2009
Date of Patent: May 13, 2014
Assignee: Honeywell International Inc.
Inventors: Kailash Krishnaswamy, Sara Susca
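A toy version of plane extraction with an uncertainty estimate, assuming a least-squares fit z = ax + by + c and using the residual standard deviation as the uncertainty (the patent does not specify these choices; all names are illustrative):

```python
import numpy as np

def fit_plane_with_uncertainty(points):
    """Least-squares plane fit z = a*x + b*y + c over Nx3 points; returns
    the coefficients and the residual standard deviation as a simple
    uncertainty estimate for the plane attribute."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, _, _, _ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    sigma = np.std(points[:, 2] - A @ coeffs)
    return coeffs, sigma

# noisy samples of the plane z = 0.5x - 0.2y + 1
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, (100, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0 + rng.normal(0.0, 0.01, 100)
coeffs, sigma = fit_plane_with_uncertainty(np.column_stack([xy, z]))
```

A navigation filter consuming the plane attribute could then weight it by `sigma`, which is the role the uncertainty estimate plays in the abstract.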
-
Patent number: 8660365
Abstract: Systems and methods for processing extracted plane features are provided. In one embodiment, a method for processing extracted plane features includes: estimating an area of each plane of a plurality of planes extracted from data collected by an imaging sensor; generating a list of detected planes including the area of each plane; filtering the list of detected planes to produce a list of candidates for merger, discarding any plane that does not satisfy an actual-points-received criterion; applying a primary merge algorithm to the list of candidates for merger that iteratively produces a list of merged planes by testing hypothetical merged planes against merging criteria, the hypothetical merged planes each comprising a plane from the list of merged planes and a plane from the list of candidates for merger; and outputting a final list of planes.
Type: Grant
Filed: July 29, 2010
Date of Patent: February 25, 2014
Assignee: Honeywell International Inc.
Inventors: Jan Lukas, Sara Susca, Ondrej Kotaba
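The iterative hypothesis-test structure of the merge step can be sketched as a greedy loop. This is a simplification under stated assumptions: each plane is reduced to a (unit normal, offset, area) triple, and the merging criteria are a normal-angle and offset tolerance; the function and tolerances are hypothetical, not the patented algorithm.

```python
import numpy as np

def merge_planes(planes, angle_tol_deg=5.0, offset_tol=0.1):
    """Greedy merge of (unit_normal, offset, area) planes, largest first.
    A candidate merges into an existing merged plane when the hypothetical
    combination passes the angle and offset criteria; otherwise it starts
    a new merged plane."""
    merged = []
    cos_tol = np.cos(np.radians(angle_tol_deg))
    for n, d, area in sorted(planes, key=lambda p: -p[2]):
        for i, (mn, md, marea) in enumerate(merged):
            if np.dot(n, mn) > cos_tol and abs(d - md) < offset_tol:
                w = marea + area  # area-weighted combination
                new_n = (mn * marea + n * area) / w
                merged[i] = (new_n / np.linalg.norm(new_n),
                             (md * marea + d * area) / w, w)
                break
        else:
            merged.append((np.asarray(n, float), float(d), float(area)))
    return merged

# two nearly coplanar floor segments and one wall
floor = (np.array([0.0, 0.0, 1.0]), 0.0, 4.0)
n2 = np.array([0.0, 0.02, 1.0]); n2 = n2 / np.linalg.norm(n2)
floor2 = (n2, 0.03, 2.0)
wall = (np.array([1.0, 0.0, 0.0]), 1.0, 3.0)
merged = merge_planes([floor, floor2, wall])
```

The two floor segments collapse into one merged plane while the wall survives separately, so three detected planes yield a final list of two.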
-
Patent number: 8467612
Abstract: A method for navigating identifies line features in a first three-dimensional (3-D) image and a second 3-D image as a navigation platform traverses an area and compares the line features in the first 3-D image with the corresponding line features in the second 3-D image. When the line features compared in the first and the second 3-D images are within a prescribed tolerance threshold, the method uses a conditional set of geometrical criteria to determine whether the line features in the first 3-D image match the corresponding line features in the second 3-D image.
Type: Grant
Filed: October 13, 2008
Date of Patent: June 18, 2013
Assignee: Honeywell International Inc.
Inventors: Sara Susca, Kailash Krishnaswamy
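A two-stage tolerance-then-criteria check like the one described above can be sketched as follows, assuming each line feature is reduced to a (midpoint, unit direction, length) triple; the specific tolerances and the length-ratio criterion are illustrative stand-ins for the patent's conditional geometrical criteria.

```python
import numpy as np

def match_lines(lines_a, lines_b, angle_tol_deg=5.0, dist_tol=0.2, len_ratio=0.8):
    """Each line is (midpoint, unit_direction, length). Two lines match
    when they first pass the direction/proximity tolerance stage and then
    a geometric criterion on length similarity."""
    matches = []
    cos_tol = np.cos(np.radians(angle_tol_deg))
    for i, (ma, da, la) in enumerate(lines_a):
        for j, (mb, db, lb) in enumerate(lines_b):
            if abs(np.dot(da, db)) < cos_tol:
                continue  # stage 1: direction tolerance
            if np.linalg.norm(ma - mb) > dist_tol:
                continue  # stage 1: proximity tolerance
            if min(la, lb) / max(la, lb) < len_ratio:
                continue  # stage 2: geometric criterion
            matches.append((i, j))
    return matches

lines_a = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 1.0)]
lines_b = [(np.array([0.05, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 0.95),
           (np.array([2.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 1.0)]
matches = match_lines(lines_a, lines_b)
```

Only the nearby, parallel, similar-length line survives both stages.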
-
Patent number: 8238612
Abstract: A method for determining motion is provided. The method determines a rotation of an object from a first time to a second time by analyzing a first 2D image obtained at the first time and a second 2D image obtained at the second time. Then, the method determines a translation of the object from the first time to the second time based on the determined rotation, 3D information relating to the first image, and 3D information relating to the second image.
Type: Grant
Filed: August 28, 2008
Date of Patent: August 7, 2012
Assignee: Honeywell International Inc.
Inventors: Sara Susca, Kailash Krishnaswamy
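Once the rotation has been estimated from the 2D imagery, the translation step reduces to simple algebra on matched 3D points: if p_b = R p_a + t for every match, then t is the mean of p_b - R p_a. A minimal sketch under that assumption (point matching and rotation estimation are omitted; all names are illustrative):

```python
import numpy as np

def translation_given_rotation(points_a, points_b, R):
    """With rotation R already known, recover the translation from
    matched 3D point sets via t = mean(p_b - R @ p_a)."""
    return np.mean(points_b - points_a @ R.T, axis=0)

# toy rigid motion: a 30-degree yaw plus a known translation
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
pa = np.random.default_rng(1).uniform(-1.0, 1.0, (20, 3))
pb = pa @ R.T + t_true
t_est = translation_given_rotation(pa, pb, R)
```

With exact correspondences the recovered translation matches `t_true`; with noisy 3D data the mean acts as a least-squares estimate.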
-
Publication number: 20120183179
Abstract: A method comprises receiving a first frame from at least one imaging device, receiving a second frame from the at least one imaging device, analyzing at least a portion of the first frame and at least a portion of the second frame, and indicating when at least one of a zero velocity update and a zero attitude update should be performed based at least in part on the analysis of the at least a portion of the first frame and the at least a portion of the second frame. The first frame is captured at a first vantage point and the second frame is captured at a second vantage point.
Type: Application
Filed: January 19, 2011
Publication date: July 19, 2012
Applicant: HONEYWELL INTERNATIONAL INC.
Inventors: Sara Susca, Viswanath Talasila, Shrikant Rao
-
Patent number: 8213706
Abstract: A method for real-time visual odometry comprises capturing a first three-dimensional image of a location at a first time, capturing a second three-dimensional image of the location at a second time that is later than the first time, and extracting one or more features and their descriptors from each of the first and second three-dimensional images. One or more features from the first three-dimensional image are then matched with one or more features from the second three-dimensional image. The method further comprises determining changes in rotation and translation between the first and second three-dimensional images from the first time to the second time using a random sample consensus (RANSAC) process and a unique iterative refinement technique.
Type: Grant
Filed: April 22, 2008
Date of Patent: July 3, 2012
Assignee: Honeywell International Inc.
Inventors: Kailash Krishnaswamy, Sara Susca, Robert C. McCroskey
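A compact sketch of RANSAC over matched 3D features, using the standard Kabsch/SVD rigid-fit as the model and a single refit on the winning inlier set as a simple stand-in for the patent's iterative refinement technique (which is not described in the abstract; all parameters are illustrative):

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def ransac_rigid(P, Q, iters=200, inlier_tol=0.05, seed=0):
    """Fit rigid transforms to random 3-point samples, keep the hypothesis
    with the most inliers, then refit on that inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        inliers = np.linalg.norm(Q - (P @ R.T + t), axis=1) < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return kabsch(P[best_inliers], Q[best_inliers])

# synthetic matched features: rigid motion plus a few gross outliers
theta = np.radians(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.1, 0.5])
P = np.random.default_rng(3).uniform(-1.0, 1.0, (30, 3))
Q = P @ R_true.T + t_true
Q[:5] += 5.0  # corrupt five matches
R_est, t_est = ransac_rigid(P, Q)
```

The corrupted matches are rejected as outliers, and the refit on the clean inliers recovers the true motion.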
-
Publication number: 20120027310
Abstract: Systems and methods for processing extracted plane features are provided. In one embodiment, a method for processing extracted plane features includes: estimating an area of each plane of a plurality of planes extracted from data collected by an imaging sensor; generating a list of detected planes including the area of each plane; filtering the list of detected planes to produce a list of candidates for merger, discarding any plane that does not satisfy an actual-points-received criterion; applying a primary merge algorithm to the list of candidates for merger that iteratively produces a list of merged planes by testing hypothetical merged planes against merging criteria, the hypothetical merged planes each comprising a plane from the list of merged planes and a plane from the list of candidates for merger; and outputting a final list of planes.
Type: Application
Filed: July 29, 2010
Publication date: February 2, 2012
Applicant: HONEYWELL INTERNATIONAL INC.
Inventors: Jan Lukas, Sara Susca, Ondrej Kotaba
-
Patent number: 8103056
Abstract: A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sender subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location.
Type: Grant
Filed: October 15, 2008
Date of Patent: January 24, 2012
Assignee: Honeywell International Inc.
Inventors: Kailash Krishnaswamy, Roland Miezianko, Sara Susca
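The final correlation step, locating the sender's selected portion inside the receiver's image, can be sketched as brute-force template matching. This toy uses the raw patch itself as the "descriptor" and sum-of-squared-differences as the correlation score; the real system's descriptors and matching are not specified in the abstract, and all names here are illustrative.

```python
import numpy as np

def locate_patch(receiver_image, patch):
    """Slide the sender's target patch over the receiver image and return
    the top-left (row, col) with the smallest sum of squared differences."""
    H, W = receiver_image.shape
    h, w = patch.shape
    best, best_err = None, float("inf")
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            err = np.sum((receiver_image[r:r + h, c:c + w] - patch) ** 2)
            if err < best_err:
                best, best_err = (r, c), err
    return best

# the receiver sees the same scene; the sender's patch was cut from (5, 7)
scene = np.random.default_rng(2).uniform(0.0, 1.0, (20, 20))
patch = scene[5:9, 7:11].copy()
found = locate_patch(scene, patch)
```

The recovered position identifies the correlated second selected portion in the receiver-side image.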
-
Publication number: 20110218733
Abstract: A navigation system comprises an image sensor operable to obtain a first image at a first moment in time and a second image at a second moment in time; an inertial measurement unit (IMU) operable to obtain measurement data corresponding to the first and second moments in time; and a processing unit coupled to the image sensor and the IMU. The processing unit is operable to estimate motion between the first and second moments in time based on the measurement data from the IMU; calculate a plurality of transformations based on the estimated motion; apply each of the plurality of transformations to the first image to produce a plurality of predicted images; compare the second image to each of the plurality of predicted images; select the predicted image from the plurality of predicted images which most closely matches the second image; and compensate for error in the IMU measurement data based on the transformation corresponding to the selected predicted image.
Type: Application
Filed: March 4, 2010
Publication date: September 8, 2011
Applicant: HONEYWELL INTERNATIONAL INC.
Inventors: Rida Hamza, Sara Susca
-
Publication number: 20110102545
Abstract: In one embodiment, a method comprises generating three-dimensional (3D) imaging data for an environment using an imaging sensor, extracting a plane from the 3D imaging data, and estimating an uncertainty of an attribute associated with the extracted plane. The method further comprises generating a navigation solution using the attribute associated with the extracted plane and the estimate of the uncertainty of the attribute associated with the extracted plane.
Type: Application
Filed: October 30, 2009
Publication date: May 5, 2011
Applicant: HONEYWELL INTERNATIONAL INC.
Inventors: Kailash Krishnaswamy, Sara Susca
-
Publication number: 20100092071
Abstract: A method for navigating identifies line features in a first three-dimensional (3-D) image and a second 3-D image as a navigation platform traverses an area and compares the line features in the first 3-D image with the corresponding line features in the second 3-D image. When the line features compared in the first and the second 3-D images are within a prescribed tolerance threshold, the method uses a conditional set of geometrical criteria to determine whether the line features in the first 3-D image match the corresponding line features in the second 3-D image.
Type: Application
Filed: October 13, 2008
Publication date: April 15, 2010
Applicant: HONEYWELL INTERNATIONAL INC.
Inventors: Sara Susca, Kailash Krishnaswamy
-
Publication number: 20100092033
Abstract: A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sender subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location.
Type: Application
Filed: October 15, 2008
Publication date: April 15, 2010
Applicant: HONEYWELL INTERNATIONAL INC.
Inventors: Kailash Krishnaswamy, Roland Miezianko, Sara Susca
-
Publication number: 20090279741
Abstract: A method for determining motion is provided. The method determines a rotation of an object from a first time to a second time by analyzing a first 2D image obtained at the first time and a second 2D image obtained at the second time. Then, the method determines a translation of the object from the first time to the second time based on the determined rotation, 3D information relating to the first image, and 3D information relating to the second image.
Type: Application
Filed: August 28, 2008
Publication date: November 12, 2009
Applicant: HONEYWELL
Inventors: Sara Susca, Kailash Krishnaswamy
-
Publication number: 20090263009
Abstract: A method for real-time visual odometry comprises capturing a first three-dimensional image of a location at a first time, capturing a second three-dimensional image of the location at a second time that is later than the first time, and extracting one or more features and their descriptors from each of the first and second three-dimensional images. One or more features from the first three-dimensional image are then matched with one or more features from the second three-dimensional image. The method further comprises determining changes in rotation and translation between the first and second three-dimensional images from the first time to the second time using a random sample consensus (RANSAC) process and a unique iterative refinement technique.
Type: Application
Filed: April 22, 2008
Publication date: October 22, 2009
Applicant: Honeywell International Inc.
Inventors: Kailash Krishnaswamy, Sara Susca, Robert C. McCroskey