Patents by Inventor Phillip Haeusler
Phillip Haeusler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11927438
Abstract: In an example, a method for calibrating fiducials relative to a workpiece is disclosed and involves receiving, from one or more cameras, images of the fiducials, each image acquired from a different camera pose, where the fiducials comprise (i) one or more first fiducials fixed at one or more unknown locations on the workpiece and (ii) calibration fiducials disposed at known locations on the workpiece; estimating poses of the fiducials, where estimating the poses of the fiducials comprises, in each image, estimating, for each fiducial visible in the image, a pose of the fiducial relative to the camera(s); formulating a pose graph comprising nodes for the images and for the fiducials, the poses estimated for the fiducials, the calibration fiducials as landmarks with known poses, and constraints between the calibration fiducials and the images; and solving the pose graph to determine poses of the first fiducial(s) that satisfy the constraints.
Type: Grant
Filed: January 2, 2020
Date of Patent: March 12, 2024
Assignee: The Boeing Company
Inventors: Phillip Haeusler, Alexandre Desbiez
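The pose-graph formulation in this abstract can be illustrated with a toy example. The sketch below is a translation-only 2D simplification (a real pose graph carries full 6-DoF poses and is solved with nonlinear least squares); all fiducial names, positions, and observations are invented for illustration:

```python
import numpy as np

# Translation-only 2D toy of the pose graph described in the abstract.
# Each image observes some fiducials in the camera frame. Calibration
# fiducials are landmarks with known workpiece positions; the "unk"
# fiducial position is what we solve for. All values are invented.
known = {"cal1": np.array([0.0, 0.0]), "cal2": np.array([4.0, 0.0])}

# observations[image][fiducial] = fiducial position relative to the camera
observations = {
    "img1": {"cal1": np.array([-1.0, 0.0]), "unk": np.array([1.0, 3.0])},
    "img2": {"cal2": np.array([1.0, 0.0]), "unk": np.array([-1.0, 3.0])},
}

# Step 1: fix each camera pose from its known-landmark constraints
# (camera = landmark position minus the relative observation).
cameras = {img: np.mean([known[f] - rel for f, rel in obs.items()
                         if f in known], axis=0)
           for img, obs in observations.items()}

# Step 2: the unknown fiducial is the least-squares consensus (here a
# plain mean) of camera pose + relative observation across all images.
estimates = [cameras[img] + obs["unk"]
             for img, obs in observations.items() if "unk" in obs]
unknown = np.mean(estimates, axis=0)
print(unknown)  # → [2. 3.]
```

With rotations included, each constraint becomes a rigid-transform edge and the mean is replaced by an iterative graph optimizer, but the structure (landmark nodes, image nodes, relative-pose edges) is the same.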
-
Patent number: 11908164
Abstract: Provided are systems and methods for auto calibrating a vehicle using a calibration target that is generated from the vehicle's sensor data. In one example, the method may include receiving sensor data associated with a road captured by one or more sensors of a vehicle, identifying lane line data points within the sensor data, generating a representation which includes positions of a plurality of lane lines of the road based on the identified lane line data points, and adjusting a calibration parameter of a sensor from among the one or more sensors of the vehicle based on the representation of the plurality of lane lines.
Type: Grant
Filed: July 23, 2021
Date of Patent: February 20, 2024
Assignee: Embark Trucks Inc.
Inventor: Phillip Haeusler
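One way a lane-line representation can drive a calibration adjustment is via geometry the road guarantees: on a straight segment, lane lines fitted in a well-calibrated sensor frame should run parallel to the vehicle's direction of travel. The sketch below is a hypothetical illustration of that idea for a single yaw parameter; the 2-degree error and lane geometry are simulated, not taken from the patent:

```python
import numpy as np

# Hypothetical sketch: on a straight road, the mean heading of the
# fitted lane lines in the sensor frame estimates a yaw miscalibration.
true_yaw = np.deg2rad(2.0)  # simulated sensor yaw error

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Two straight lane lines at y = -1.8 m and y = +1.8 m in the vehicle
# frame, observed through the miscalibrated sensor.
xs = np.linspace(0.0, 50.0, 100)
lines = [np.stack([xs, np.full_like(xs, y0)], axis=1) @ rot(true_yaw).T
         for y0 in (-1.8, 1.8)]

# Fit a line to each set of lane points; its slope gives a heading, and
# the negated mean heading is the yaw correction to apply.
headings = [np.arctan(np.polyfit(p[:, 0], p[:, 1], 1)[0]) for p in lines]
yaw_correction = -float(np.mean(headings))
print(np.rad2deg(yaw_correction))  # ≈ -2.0
```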
-
Patent number: 11721043
Abstract: Provided are systems and methods for auto calibrating a vehicle using a calibration target that is generated from the vehicle's sensor data. In one example, the method may include receiving sensor data associated with a road captured by one or more sensors of a vehicle, identifying lane line data points within the sensor data, generating a representation which includes positions of a plurality of lane lines of the road based on the identified lane line data points, and adjusting a calibration parameter of a sensor from among the one or more sensors of the vehicle based on the representation of the plurality of lane lines.
Type: Grant
Filed: February 18, 2022
Date of Patent: August 8, 2023
Assignee: Embark Trucks Inc.
Inventor: Phillip Haeusler
-
Publication number: 20230028484
Abstract: Provided are systems and methods for auto calibrating a vehicle using a calibration target that is generated from the vehicle's sensor data. In one example, the method may include receiving sensor data associated with a road captured by one or more sensors of a vehicle, identifying lane line data points within the sensor data, generating a representation which includes positions of a plurality of lane lines of the road based on the identified lane line data points, and adjusting a calibration parameter of a sensor from among the one or more sensors of the vehicle based on the representation of the plurality of lane lines.
Type: Application
Filed: July 23, 2021
Publication date: January 26, 2023
Inventor: Phillip Haeusler
-
Publication number: 20230028919
Abstract: Provided are systems and methods for auto calibrating a vehicle using a calibration target that is generated from the vehicle's sensor data. In one example, the method may include receiving sensor data associated with a road captured by one or more sensors of a vehicle, identifying lane line data points within the sensor data, generating a representation which includes positions of a plurality of lane lines of the road based on the identified lane line data points, and adjusting a calibration parameter of a sensor from among the one or more sensors of the vehicle based on the representation of the plurality of lane lines.
Type: Application
Filed: February 18, 2022
Publication date: January 26, 2023
Inventor: Phillip Haeusler
-
Patent number: 11254019
Abstract: Systems and methods are provided for automatic intrinsic and extrinsic calibration for a robot optical sensor. An implementation includes an optical sensor; a robot arm; a calibration chart; one or more processors; and a memory storing instructions that cause the one or more processors to perform operations that includes: determining a set of poses for calibrating the first optical sensor; generating, based at least on the set of poses, pose data comprising three dimensional (3D) position and orientation data; moving, based at least on the pose data, the robot arm into a plurality of poses; at each pose of the plurality of poses, capturing a set of images of the calibration chart with the first optical sensor and recording a pose; calculating intrinsic calibration parameters, based at least on the set of captured images; and calculating extrinsic calibration parameters, based at least on the set of captured images.
Type: Grant
Filed: March 5, 2019
Date of Patent: February 22, 2022
Assignee: The Boeing Company
Inventors: Phillip Haeusler, Jason John Cochrane
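To make the intrinsic step concrete, the sketch below recovers a single pinhole focal length from simulated 3D chart points and their projections gathered at several poses. This is a minimal illustration under invented numbers, not the patent's method: a real pipeline (e.g. OpenCV's `calibrateCamera`) estimates the full intrinsic matrix plus lens distortion, and a separate hand-eye solve yields the extrinsics:

```python
import numpy as np

# Minimal sketch of one intrinsic parameter: for a pinhole camera,
# (u - cx) = f * X/Z and (v - cy) = f * Y/Z, so the focal length f is
# linear in the known 3D points and observed pixels. Simulated values.
true_f, cx, cy = 800.0, 320.0, 240.0  # ground truth, simulation only

rng = np.random.default_rng(1)
A, b = [], []
for _ in range(5):  # five robot-arm poses of the calibration chart
    pts = rng.uniform([-0.2, -0.2, 0.8], [0.2, 0.2, 1.5], size=(10, 3))
    for X, Y, Z in pts:
        u, v = true_f * X / Z + cx, true_f * Y / Z + cy  # projections
        A += [[X / Z], [Y / Z]]   # coefficients of f
        b += [u - cx, v - cy]     # centered pixel observations

# Least-squares solve over all poses recovers the focal length.
(f_est,), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(f_est)  # → 800.0 (to floating-point precision)
```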
-
Patent number: 11080892
Abstract: A method of localizing an object includes capturing a first image of the object from a first camera position, performing edge detection on the first image to form an edge-detected second image, and performing a distance transform on the second image to form a distance transformed third image. A virtual camera is positioned in virtual space relative to a virtual model of the object, and an edge-detected fourth image of the virtual model of the object is generated. An alignment cost is generated of the edge-detected fourth image of the virtual model relative to the edge-detected second image of the object, and a position of the virtual camera is transformed to a position having a lower alignment cost than the alignment cost. A physical location of the object is estimated based on the transformed virtual camera position.
Type: Grant
Filed: October 7, 2019
Date of Patent: August 3, 2021
Assignee: The Boeing Company
Inventor: Phillip Haeusler
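The distance-transform alignment loop in this abstract can be sketched in one degree of freedom: the observed edge image is distance-transformed once, the rendered edge template of the virtual model is moved by the virtual camera, and the camera keeps moving while the cost drops. Image sizes and edge positions below are invented for illustration:

```python
import numpy as np

# 1-DoF toy of the alignment loop: cost(shift) is the mean value of the
# observed image's distance transform under the shifted edge template.
H, W = 32, 32
edges = np.zeros((H, W), dtype=bool)
edges[:, 20] = True  # observed vertical edge at column 20

# Brute-force Euclidean distance transform of the observed edge image.
ys, xs = np.nonzero(edges)
yy, xx = np.mgrid[0:H, 0:W]
dt = np.min(np.hypot(yy[..., None] - ys, xx[..., None] - xs), axis=-1)

def cost(shift):
    # Rendered template: vertical edge at column 16, moved by `shift`.
    col = 16 + shift
    return dt[:, col].mean() if 0 <= col < W else np.inf

# Greedy descent: transform the virtual camera toward lower cost.
shift = 0
while True:
    candidates = [s for s in (shift - 1, shift + 1) if cost(s) < cost(shift)]
    if not candidates:
        break
    shift = min(candidates, key=cost)
print(shift)  # → 4 (template edge lands on the observed edge)
```

The distance transform is what makes this descent well behaved: cost falls smoothly as the template approaches the observed edges instead of being zero almost everywhere.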
-
Publication number: 20210209407
Abstract: In an example, a system for registering a three-dimensional (3D) pose of a workpiece relative to a robotic device is disclosed. The system comprises the robotic device, where the robotic device comprises one or more mounted lasers. The system also comprises one or more sensors configured to detect laser returns from laser rays projected from the one or more mounted lasers and reflected by the workpiece. The system also comprises a processor configured to receive a tessellation of the workpiece, wherein the tessellation comprises a 3D representation of the workpiece made up of cells, convert the laser returns into a 3D point cloud in a robot frame, based on the 3D point cloud, filter visible cells of the tessellation of the workpiece to form a tessellation included set, and solve for the 3D pose of the workpiece relative to the robotic device based on the tessellation included set.
Type: Application
Filed: January 2, 2020
Publication date: July 8, 2021
Inventors: Phillip Haeusler, Alexandre Desbiez
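The final "solve for the 3D pose" step can be illustrated with a standard tool: once laser returns are matched to the visible ("included") cells of the tessellation, a rigid pose follows in closed form from the Kabsch/Procrustes algorithm. The 2D sketch below assumes known correspondences and invented geometry; it shows the closed-form solve, not the patent's filtering pipeline:

```python
import numpy as np

# Kabsch sketch: recover the rigid transform taking tessellation cell
# centers (workpiece frame) to matched laser points (robot frame).
cells = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
theta, t = np.deg2rad(10.0), np.array([0.5, -0.3])  # simulated pose
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
points = cells @ R.T + t  # laser point cloud in the robot frame

# Center both sets, SVD the cross-covariance, recover R then t.
mu_c, mu_p = cells.mean(0), points.mean(0)
U, _, Vt = np.linalg.svd((cells - mu_c).T @ (points - mu_p))
R_est = (U @ Vt).T
if np.linalg.det(R_est) < 0:  # guard against a reflection solution
    Vt[-1] *= -1
    R_est = (U @ Vt).T
t_est = mu_p - R_est @ mu_c
print(np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])), t_est)
```

In practice correspondences are unknown, so this solve sits inside an iterative loop (ICP-style: match nearest cells, solve, repeat), which is where filtering to visible cells matters.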
-
Publication number: 20210207957
Abstract: In an example, a method for calibrating fiducials relative to a workpiece is disclosed and involves receiving, from one or more cameras, images of the fiducials, each image acquired from a different camera pose, where the fiducials comprise (i) one or more first fiducials fixed at one or more unknown locations on the workpiece and (ii) calibration fiducials disposed at known locations on the workpiece; estimating poses of the fiducials, where estimating the poses of the fiducials comprises, in each image, estimating, for each fiducial visible in the image, a pose of the fiducial relative to the camera(s); formulating a pose graph comprising nodes for the images and for the fiducials, the poses estimated for the fiducials, the calibration fiducials as landmarks with known poses, and constraints between the calibration fiducials and the images; and solving the pose graph to determine poses of the first fiducial(s) that satisfy the constraints.
Type: Application
Filed: January 2, 2020
Publication date: July 8, 2021
Inventors: Phillip Haeusler, Alexandre Desbiez
-
Patent number: 11055562
Abstract: In an example, a system for registering a three-dimensional (3D) pose of a workpiece relative to a robotic device is disclosed. The system comprises the robotic device, where the robotic device comprises one or more mounted lasers. The system also comprises one or more sensors configured to detect laser returns from laser rays projected from the one or more mounted lasers and reflected by the workpiece. The system also comprises a processor configured to receive a tessellation of the workpiece, wherein the tessellation comprises a 3D representation of the workpiece made up of cells, convert the laser returns into a 3D point cloud in a robot frame, based on the 3D point cloud, filter visible cells of the tessellation of the workpiece to form a tessellation included set, and solve for the 3D pose of the workpiece relative to the robotic device based on the tessellation included set.
Type: Grant
Filed: January 2, 2020
Date of Patent: July 6, 2021
Assignee: The Boeing Company
Inventors: Phillip Haeusler, Alexandre Desbiez
-
Publication number: 20210104066
Abstract: A method of localizing an object includes capturing a first image of the object from a first camera position, performing edge detection on the first image to form an edge-detected second image, and performing a distance transform on the second image to form a distance transformed third image. A virtual camera is positioned in virtual space relative to a virtual model of the object, and an edge-detected fourth image of the virtual model of the object is generated. An alignment cost is generated of the edge-detected fourth image of the virtual model relative to the edge-detected second image of the object, and a position of the virtual camera is transformed to a position having a lower alignment cost than the alignment cost. A physical location of the object is estimated based on the transformed virtual camera position.
Type: Application
Filed: October 7, 2019
Publication date: April 8, 2021
Inventor: Phillip Haeusler
-
Publication number: 20200282575
Abstract: Systems and methods are provided for automatic intrinsic and extrinsic calibration for a robot optical sensor. An implementation includes an optical sensor; a robot arm; a calibration chart; one or more processors; and a memory storing instructions that cause the one or more processors to perform operations that includes: determining a set of poses for calibrating the first optical sensor; generating, based at least on the set of poses, pose data comprising three dimensional (3D) position and orientation data; moving, based at least on the pose data, the robot arm into a plurality of poses; at each pose of the plurality of poses, capturing a set of images of the calibration chart with the first optical sensor and recording a pose; calculating intrinsic calibration parameters, based at least on the set of captured images; and calculating extrinsic calibration parameters, based at least on the set of captured images.
Type: Application
Filed: March 5, 2019
Publication date: September 10, 2020
Inventors: Phillip Haeusler, Jason John Cochrane
-
Patent number: 10445873
Abstract: A method and apparatus for performing an automated validation of a condition of assembly for a structure. A plurality of images of the structure are registered to a computer model of the structure in which an image in the plurality of images captures a portion of the structure. Each image in the plurality of images is segmented based on registration of the plurality of images to the computer model to form a plurality of image sections. A final score is generated for the condition of assembly of the structure based on whether each image section in the plurality of image sections meets a corresponding condition in which the final score indicates whether the condition of assembly is valid.
Type: Grant
Filed: February 23, 2017
Date of Patent: October 15, 2019
Assignee: The Boeing Company
Inventors: Martin Szarski, Phillip Haeusler, David Michael Bain, Richard Bain, Andrew K. Glynn, Peter Nathan Steele
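The scoring stage in this abstract has a simple shape: each image section (produced by registering images to the computer model and segmenting) is checked against its expected condition, and the per-section results aggregate into a final score. The sketch below is a hypothetical illustration; the section names, conditions, and all-sections-must-pass rule are invented for the example:

```python
from dataclasses import dataclass

# Hypothetical scoring step: one record per image section, compared
# against the condition the computer model expects for that section.
@dataclass
class Section:
    name: str
    expected: str   # condition from the computer model
    detected: str   # classifier output for this image section

    def passes(self) -> bool:
        return self.detected == self.expected

def final_score(sections):
    # Score = fraction of sections meeting their condition; the
    # assembly condition is valid only if every section passes.
    passed = sum(s.passes() for s in sections)
    return passed / len(sections), passed == len(sections)

sections = [
    Section("fastener_row_1", "installed", "installed"),
    Section("fastener_row_2", "installed", "installed"),
    Section("bracket_A", "installed", "missing"),
]
score, valid = final_score(sections)
print(score, valid)  # → 0.6666666666666666 False
```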
-
Publication number: 20180240226
Abstract: A method and apparatus for performing an automated validation of a condition of assembly for a structure. A plurality of images of the structure are registered to a computer model of the structure in which an image in the plurality of images captures a portion of the structure. Each image in the plurality of images is segmented based on registration of the plurality of images to the computer model to form a plurality of image sections. A final score is generated for the condition of assembly of the structure based on whether each image section in the plurality of image sections meets a corresponding condition in which the final score indicates whether the condition of assembly is valid.
Type: Application
Filed: February 23, 2017
Publication date: August 23, 2018
Inventors: Martin Szarski, Phillip Haeusler, David Michael Bain, Richard Bain, Andrew K. Glynn, Peter Nathan Steele