Patents by Inventor Sara Susca

Sara Susca has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9547910
    Abstract: A navigation system comprises an image sensor operable to obtain a first image at a first moment in time and a second image at a second moment in time; an inertial measurement unit (IMU) operable to obtain measurement data corresponding to the first and second moments in time; and a processing unit coupled to the image sensor and the IMU. The processing unit is operable to estimate motion between the first and second moments in time based on the measurement data from the IMU; calculate a plurality of transformations based on the estimated motion; apply each of the plurality of transformations to the first image to produce a plurality of predicted images; compare the second image to each of the plurality of predicted images; select the predicted image from the plurality of predicted images which most closely matches the second image; and compensate for error in the IMU measurement data based on the transformation corresponding to the selected predicted image.
    Type: Grant
    Filed: March 4, 2010
    Date of Patent: January 17, 2017
    Assignee: Honeywell International Inc.
    Inventors: Rida Hamza, Sara Susca
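The predict-and-match loop in the abstract above can be sketched in Python with NumPy. This is only an illustrative reduction: the patent derives its candidate transformations from IMU motion estimates, whereas this sketch models them as simple pixel shifts and scores each predicted image by sum of squared differences. All function and variable names here are hypothetical, not from the patent.

```python
import numpy as np

def select_best_transformation(img1, img2, candidate_shifts):
    """Apply each candidate shift to img1 and keep the one whose
    predicted image most closely matches img2 (sum of squared differences)."""
    best_shift, best_err = None, np.inf
    for dx, dy in candidate_shifts:
        predicted = np.roll(np.roll(img1, dy, axis=0), dx, axis=1)
        err = np.sum((predicted.astype(float) - img2.astype(float)) ** 2)
        if err < best_err:
            best_shift, best_err = (dx, dy), err
    return best_shift

# Synthetic example: img2 is img1 shifted by (dx, dy) = (2, 1).
rng = np.random.default_rng(0)
img1 = rng.random((32, 32))
img2 = np.roll(np.roll(img1, 1, axis=0), 2, axis=1)
candidates = [(dx, dy) for dx in range(-3, 4) for dy in range(-3, 4)]
print(select_best_transformation(img1, img2, candidates))  # (2, 1)
```

With a noise-free synthetic pair the search recovers the exact shift; in the patent, the candidate set would instead be generated around the IMU's motion estimate, and the winning transformation is used to compensate for IMU error.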
  • Patent number: 8948446
    Abstract: A method comprises receiving a first frame from at least one imaging device, receiving a second frame from the at least one imaging device, analyzing at least a portion of the first frame and at least a portion of the second frame, and indicating when at least one of a zero velocity update and a zero attitude update should be performed based at least in part on the analysis of the at least a portion of the first frame and the at least a portion of the second frame. The first frame is captured at a first vantage point and the second frame is captured at a second vantage point.
    Type: Grant
    Filed: January 19, 2011
    Date of Patent: February 3, 2015
    Assignee: Honeywell International Inc.
    Inventors: Sara Susca, Viswanath Talasila, Shrikant Rao
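Under strong simplifying assumptions, the frame-comparison trigger in the abstract above might reduce to a mean-absolute-difference test: if two frames from the same camera are nearly identical, the platform has likely not moved, so a zero-velocity or zero-attitude update can be indicated. The names and the threshold below are illustrative only, not the patent's actual analysis.

```python
import numpy as np

def should_apply_zupt(frame_a, frame_b, threshold=1.0):
    """Indicate a zero-velocity/zero-attitude update when the mean
    absolute pixel difference between two frames is below a threshold,
    i.e. the camera has apparently not moved between captures."""
    diff = np.mean(np.abs(frame_a.astype(float) - frame_b.astype(float)))
    return diff < threshold

rng = np.random.default_rng(1)
still = rng.random((16, 16)) * 255
moved = np.roll(still, 3, axis=1)          # simulated camera motion
assert should_apply_zupt(still, still.copy())
assert not should_apply_zupt(still, moved)
```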
  • Patent number: 8723987
    Abstract: In one embodiment, a method comprises generating three-dimensional (3D) imaging data for an environment using an imaging sensor, extracting an extracted plane from the 3D imaging data, and estimating an uncertainty of an attribute associated with the extracted plane. The method further comprises generating a navigation solution using the attribute associated with the extracted plane and the estimate of the uncertainty of the attribute associated with the extracted plane.
    Type: Grant
    Filed: October 30, 2009
    Date of Patent: May 13, 2014
    Assignee: Honeywell International Inc.
    Inventors: Kailash Krishnaswamy, Sara Susca
  • Patent number: 8660365
    Abstract: Systems and methods for processing extracted plane features are provided. In one embodiment, a method for processing extracted plane features includes: estimating an area of each plane of a plurality of planes extracted from data collected by an imaging sensor; generating a list of detected planes including the area of each plane; filtering the list of detected planes to produce a list of candidates for merger, discarding any plane not satisfying an actual-points-received criterion; applying a primary merge algorithm to the list of candidates for merger that iteratively produces a list of merged planes by testing hypothetical merged planes against a merging criterion, the hypothetical merged planes each comprising a plane from the list of merged planes and a plane from the list of candidates for merger; and outputting a final list of planes.
    Type: Grant
    Filed: July 29, 2010
    Date of Patent: February 25, 2014
    Assignee: Honeywell International Inc.
    Inventors: Jan Lukas, Sara Susca, Ondrej Kotaba
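A loose sketch of the filter-then-merge pipeline from the abstract above, assuming each plane is represented by a unit normal, an offset, an area, and a support-point count. The greedy merge order and the specific tolerances are this sketch's simplifications, not the patent's actual merging criteria.

```python
import numpy as np

def merge_planes(planes, min_points=50, angle_tol=0.05, offset_tol=0.1):
    """Filter extracted planes by a minimum-support criterion, then
    greedily merge candidates whose normals and offsets nearly agree."""
    candidates = [p for p in planes if p["n_points"] >= min_points]
    merged = []
    for cand in sorted(candidates, key=lambda p: -p["area"]):
        for m in merged:
            same_normal = abs(np.dot(m["normal"], cand["normal"])) > 1 - angle_tol
            same_offset = abs(m["offset"] - cand["offset"]) < offset_tol
            if same_normal and same_offset:          # merging criterion met
                m["area"] += cand["area"]
                m["n_points"] += cand["n_points"]
                break
        else:
            merged.append(dict(cand))
    return merged

floor_a = {"normal": np.array([0., 0., 1.]), "offset": 0.0, "area": 2.0, "n_points": 400}
floor_b = {"normal": np.array([0., 0., 1.]), "offset": 0.02, "area": 1.0, "n_points": 300}
wall    = {"normal": np.array([1., 0., 0.]), "offset": 5.0, "area": 3.0, "n_points": 500}
noise   = {"normal": np.array([0., 1., 0.]), "offset": 1.0, "area": 0.1, "n_points": 5}
result = merge_planes([floor_a, floor_b, wall, noise])
print(len(result))  # 2: the two floor patches merge, the noise plane is discarded
```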
  • Patent number: 8467612
    Abstract: A method for navigating identifies line features in a first three-dimensional (3-D) image and a second 3-D image as a navigation platform traverses an area and compares the line features in the first 3-D image that correspond to the line features in the second 3-D image. When the line features compared in the first and the second 3-D images are within a prescribed tolerance threshold, the method uses a conditional set of geometrical criteria to determine whether the line features in the first 3-D image match the corresponding line features in the second 3-D image.
    Type: Grant
    Filed: October 13, 2008
    Date of Patent: June 18, 2013
    Assignee: Honeywell International Inc.
    Inventors: Sara Susca, Kailash Krishnaswamy
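The geometric matching test in the abstract above might look like the following sketch, which compares two 3-D line segments on direction, midpoint distance, and length. The tolerances and the particular criteria here are illustrative stand-ins for the patent's conditional set of geometrical criteria.

```python
import numpy as np

def lines_match(l1, l2, angle_tol_deg=5.0, dist_tol=0.2, len_tol=0.3):
    """Decide whether two 3-D line segments (each a pair of endpoints)
    correspond, using direction, midpoint-distance and length criteria."""
    a1, b1 = map(np.asarray, l1)
    a2, b2 = map(np.asarray, l2)
    d1, d2 = b1 - a1, b2 - a2
    cos_angle = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    angle_ok = cos_angle > np.cos(np.radians(angle_tol_deg))
    midpoint_ok = np.linalg.norm((a1 + b1) / 2 - (a2 + b2) / 2) < dist_tol
    length_ok = abs(np.linalg.norm(d1) - np.linalg.norm(d2)) < len_tol
    return angle_ok and midpoint_ok and length_ok

edge_t0 = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
edge_t1 = ((0.05, 0.0, 0.0), (1.05, 0.0, 0.0))   # same edge, slight drift
other   = ((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))     # perpendicular edge
assert lines_match(edge_t0, edge_t1)
assert not lines_match(edge_t0, other)
```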
  • Patent number: 8238612
    Abstract: A method for determining motion is provided. The method determines a rotation of an object from a first time to a second time by analyzing a first 2D image obtained at the first time and a second 2D image obtained at the second time. Then, the method determines a translation of the object from the first time to the second time based on the determined rotation, 3D information relating to the first image, and 3D information relating to the second image.
    Type: Grant
    Filed: August 28, 2008
    Date of Patent: August 7, 2012
    Assignee: Honeywell International Inc.
    Inventors: Sara Susca, Kailash Krishnaswamy
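The decoupled estimate in the abstract above (rotation first from the 2D images, then translation from the rotation plus 3D information) can be illustrated with a Kabsch/Procrustes sketch. It assumes noise-free matched point correspondences, and all names are hypothetical.

```python
import numpy as np

def estimate_rotation_2d(pts_a, pts_b):
    """Recover an in-plane rotation from matched 2-D points
    (Kabsch/Procrustes on centered coordinates)."""
    a = pts_a - pts_a.mean(axis=0)
    b = pts_b - pts_b.mean(axis=0)
    h = a.T @ b
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                 # guard against reflection
        vt[-1] *= -1
        r = vt.T @ u.T
    return r

def estimate_translation_3d(pts3d_a, pts3d_b, r3d):
    """Given the rotation, translation follows from the 3-D centroids."""
    return pts3d_b.mean(axis=0) - r3d @ pts3d_a.mean(axis=0)

# Rotation from 2-D correspondences.
theta = np.radians(30)
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts_a = np.random.default_rng(2).random((10, 2))
pts_b = pts_a @ r_true.T
r_est = estimate_rotation_2d(pts_a, pts_b)
assert np.allclose(r_est, r_true)

# Translation from 3-D correspondences, using the recovered rotation.
r3d = np.eye(3)
r3d[:2, :2] = r_true
t_true = np.array([0.5, -1.0, 2.0])
pts3d_a = np.hstack([pts_a, np.zeros((10, 1))])
pts3d_b = pts3d_a @ r3d.T + t_true
t_est = estimate_translation_3d(pts3d_a, pts3d_b, r3d)
assert np.allclose(t_est, t_true)
```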
  • Publication number: 20120183179
    Abstract: A method comprises receiving a first frame from at least one imaging device, receiving a second frame from the at least one imaging device, analyzing at least a portion of the first frame and at least a portion of the second frame, and indicating when at least one of a zero velocity update and a zero attitude update should be performed based at least in part on the analysis of the at least a portion of the first frame and the at least a portion of the second frame. The first frame is captured at a first vantage point and the second frame is captured at a second vantage point.
    Type: Application
    Filed: January 19, 2011
    Publication date: July 19, 2012
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Sara Susca, Viswanath Talasila, Shrikant Rao
  • Patent number: 8213706
    Abstract: A method for real-time visual odometry comprises capturing a first three-dimensional image of a location at a first time, capturing a second three-dimensional image of the location at a second time that is later than the first time, and extracting one or more features and their descriptors from each of the first and second three-dimensional images. One or more features from the first three-dimensional image are then matched with one or more features from the second three-dimensional image. The method further comprises determining changes in rotation and translation between the first and second three-dimensional images from the first time to the second time using a random sample consensus (RANSAC) process and a unique iterative refinement technique.
    Type: Grant
    Filed: April 22, 2008
    Date of Patent: July 3, 2012
    Assignee: Honeywell International Inc.
    Inventors: Kailash Krishnaswamy, Sara Susca, Robert C. McCroskey
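The RANSAC stage of the abstract above can be illustrated with a deliberately simplified sketch that estimates only a translation between two matched 3-D point sets; the patent estimates rotation as well, and its iterative refinement technique is reduced here to a single least-squares step over the inlier set.

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iters=100, inlier_tol=0.05, seed=0):
    """RANSAC sketch: hypothesize a translation from one random match,
    count inliers, then refine the best hypothesis over all its inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                      # minimal hypothesis
        residuals = np.linalg.norm(pts_b - (pts_a + t), axis=1)
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refinement step: least-squares translation over the inlier set.
    t_refined = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t_refined, best_inliers

rng = np.random.default_rng(3)
pts_a = rng.random((50, 3))
t_true = np.array([0.2, -0.1, 0.4])
pts_b = pts_a + t_true
pts_b[:5] += 0.5 + rng.random((5, 3))                # inject 5 outlier matches
t_est, inliers = ransac_translation(pts_a, pts_b)
assert np.allclose(t_est, t_true)
assert inliers.sum() == 45
```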
  • Publication number: 20120027310
    Abstract: Systems and methods for processing extracted plane features are provided. In one embodiment, a method for processing extracted plane features includes: estimating an area of each plane of a plurality of planes extracted from data collected by an imaging sensor; generating a list of detected planes including the area of each plane; filtering the list of detected planes to produce a list of candidates for merger, discarding any plane not satisfying an actual-points-received criterion; applying a primary merge algorithm to the list of candidates for merger that iteratively produces a list of merged planes by testing hypothetical merged planes against a merging criterion, the hypothetical merged planes each comprising a plane from the list of merged planes and a plane from the list of candidates for merger; and outputting a final list of planes.
    Type: Application
    Filed: July 29, 2010
    Publication date: February 2, 2012
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Jan Lukas, Sara Susca, Ondrej Kotaba
  • Patent number: 8103056
    Abstract: A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sending subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location.
    Type: Grant
    Filed: October 15, 2008
    Date of Patent: January 24, 2012
    Assignee: Honeywell International Inc.
    Inventors: Kailash Krishnaswamy, Roland Miezianko, Sara Susca
  • Publication number: 20110218733
    Abstract: A navigation system comprises an image sensor operable to obtain a first image at a first moment in time and a second image at a second moment in time; an inertial measurement unit (IMU) operable to obtain measurement data corresponding to the first and second moments in time; and a processing unit coupled to the image sensor and the IMU. The processing unit is operable to estimate motion between the first and second moments in time based on the measurement data from the IMU; calculate a plurality of transformations based on the estimated motion; apply each of the plurality of transformations to the first image to produce a plurality of predicted images; compare the second image to each of the plurality of predicted images; select the predicted image from the plurality of predicted images which most closely matches the second image; and compensate for error in the IMU measurement data based on the transformation corresponding to the selected predicted image.
    Type: Application
    Filed: March 4, 2010
    Publication date: September 8, 2011
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Rida Hamza, Sara Susca
  • Publication number: 20110102545
    Abstract: In one embodiment, a method comprises generating three-dimensional (3D) imaging data for an environment using an imaging sensor, extracting an extracted plane from the 3D imaging data, and estimating an uncertainty of an attribute associated with the extracted plane. The method further comprises generating a navigation solution using the attribute associated with the extracted plane and the estimate of the uncertainty of the attribute associated with the extracted plane.
    Type: Application
    Filed: October 30, 2009
    Publication date: May 5, 2011
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Kailash Krishnaswamy, Sara Susca
  • Publication number: 20100092071
    Abstract: A method for navigating identifies line features in a first three-dimensional (3-D) image and a second 3-D image as a navigation platform traverses an area and compares the line features in the first 3-D image that correspond to the line features in the second 3-D image. When the line features compared in the first and the second 3-D images are within a prescribed tolerance threshold, the method uses a conditional set of geometrical criteria to determine whether the line features in the first 3-D image match the corresponding line features in the second 3-D image.
    Type: Application
    Filed: October 13, 2008
    Publication date: April 15, 2010
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Sara Susca, Kailash Krishnaswamy
  • Publication number: 20100092033
    Abstract: A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sending subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location.
    Type: Application
    Filed: October 15, 2008
    Publication date: April 15, 2010
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Kailash Krishnaswamy, Roland Miezianko, Sara Susca
  • Publication number: 20090279741
    Abstract: A method for determining motion is provided. The method determines a rotation of an object from a first time to a second time by analyzing a first 2D image obtained at the first time and a second 2D image obtained at the second time. Then, the method determines a translation of the object from the first time to the second time based on the determined rotation, 3D information relating to the first image, and 3D information relating to the second image.
    Type: Application
    Filed: August 28, 2008
    Publication date: November 12, 2009
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Sara Susca, Kailash Krishnaswamy
  • Publication number: 20090263009
    Abstract: A method for real-time visual odometry comprises capturing a first three-dimensional image of a location at a first time, capturing a second three-dimensional image of the location at a second time that is later than the first time, and extracting one or more features and their descriptors from each of the first and second three-dimensional images. One or more features from the first three-dimensional image are then matched with one or more features from the second three-dimensional image. The method further comprises determining changes in rotation and translation between the first and second three-dimensional images from the first time to the second time using a random sample consensus (RANSAC) process and a unique iterative refinement technique.
    Type: Application
    Filed: April 22, 2008
    Publication date: October 22, 2009
    Applicant: Honeywell International Inc.
    Inventors: Kailash Krishnaswamy, Sara Susca, Robert C. McCroskey