Patents by Inventor Hayk Martirosyan
Hayk Martirosyan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220309687
Abstract: Systems and methods are disclosed for tracking objects in a physical environment using visual sensors onboard an autonomous unmanned aerial vehicle (UAV). In certain embodiments, images of the physical environment captured by the onboard visual sensors are processed to extract semantic information about detected objects. Processing of the captured images may involve applying machine learning techniques such as a deep convolutional neural network to extract semantic cues regarding objects detected in the images. The object tracking can be utilized, for example, to facilitate autonomous navigation by the UAV or to generate and display augmentative information regarding tracked objects to users.
Type: Application
Filed: April 4, 2022
Publication date: September 29, 2022
Applicant: Skydio, Inc.
Inventors: Saumitro Dasgupta, Hayk Martirosyan, Hema Koppula, Alex Kendall, Austin Stone, Matthew Donahoe, Abraham Galton Bachrach, Adam Parker Bry
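The entry above describes extracting semantic cues (class labels and the like) from onboard imagery and using them for object tracking. As a point of reference only, here is a minimal Python sketch of one way per-frame detections could be associated into tracks by bounding-box overlap; the detector, class names, and thresholds are placeholders, not details from the patent.

```python
# Minimal sketch (not the patented system): greedy association of per-frame
# detections into tracks. Detections are assumed to come from some detector
# (e.g. a deep CNN) as (label, box) tuples; boxes are (x_min, y_min, x_max, y_max).
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    label: str
    box: tuple

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def update_tracks(tracks, detections, next_id, iou_threshold=0.3):
    """Greedily match detections (label, box) to existing tracks of the same label."""
    unmatched = list(detections)
    for track in tracks:
        best, best_iou = None, iou_threshold
        for label, box in unmatched:
            if label == track.label and iou(track.box, box) > best_iou:
                best, best_iou = (label, box), iou(track.box, box)
        if best is not None:
            track.box = best[1]
            unmatched.remove(best)
    for label, box in unmatched:          # start new tracks for unmatched detections
        tracks.append(Track(next_id, label, box))
        next_id += 1
    return tracks, next_id

# Example: two frames of detections produced by a hypothetical detector.
tracks, next_id = update_tracks([], [("person", (100, 80, 160, 220))], 0)
tracks, next_id = update_tracks(tracks, [("person", (108, 82, 168, 224))], next_id)
print([(t.track_id, t.label, t.box) for t in tracks])
```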
-
Patent number: 11455894
Abstract: Described herein are systems and methods for structure scan using an unmanned aerial vehicle. For example, some methods include accessing a three-dimensional map of a structure; generating facets based on the three-dimensional map, wherein the facets are respectively a polygon on a plane in three-dimensional space that is fit to a subset of the points in the three-dimensional map; generating a scan plan based on the facets, wherein the scan plan includes a sequence of poses for an unmanned aerial vehicle to assume to enable capture, using image sensors of the unmanned aerial vehicle, of images of the structure; causing the unmanned aerial vehicle to fly to assume a pose corresponding to one of the sequence of poses of the scan plan; and capturing one or more images of the structure from the pose.
Type: Grant
Filed: June 8, 2020
Date of Patent: September 27, 2022
Assignee: Skydio, Inc.
Inventors: Peter Henry, Jack Zhu, Brian Richman, Harrison Zheng, Hayk Martirosyan, Matthew Donahoe, Abraham Galton Bachrach, Adam Bry
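As a rough illustration of the facet-generation step described above (fitting a plane to a subset of the 3D map points), the sketch below does a least-squares plane fit with an inlier check. The distance tolerance and the synthetic data are assumptions; the patent does not specify the fitting method.

```python
# Rough sketch of fitting a plane to a subset of 3D map points so the subset
# can be represented as a planar facet. Least-squares fit via SVD.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane through Nx3 points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                              # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def plane_inliers(points, centroid, normal, max_dist=0.05):
    """Boolean mask of points within max_dist (metres, assumed) of the plane."""
    return np.abs((points - centroid) @ normal) < max_dist

# Example: noisy samples of a tilted, roof-like plane.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.01, 200)
pts = np.column_stack([xy, z])
c, n = fit_plane(pts)
print("normal:", n, "inlier fraction:", plane_inliers(pts, c, n).mean())
```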
-
Patent number: 11455895
Abstract: Described herein are systems for roof scan using an unmanned aerial vehicle. For example, some methods include capturing, using an unmanned aerial vehicle, an overview image of a roof of a building from above the roof; presenting a suggested bounding polygon overlaid on the overview image to a user; determining a bounding polygon based on the suggested bounding polygon and user edits; based on the bounding polygon, determining a flight path including a sequence of poses of the unmanned aerial vehicle with respective fields of view at a fixed height that collectively cover the bounding polygon; flying the unmanned aerial vehicle to a sequence of scan poses with horizontal positions matching respective poses of the flight path and vertical positions determined to maintain a consistent distance above the roof; and scanning the roof from the sequence of scan poses to generate a three-dimensional map of the roof.
Type: Grant
Filed: August 6, 2020
Date of Patent: September 27, 2022
Assignee: Skydio, Inc.
Inventors: Peter Henry, Jack Zhu, Brian Richman, Harrison Zheng, Hayk Martirosyan, Matthew Donahoe, Abraham Galton Bachrach, Adam Bry
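The roof-scan abstract above combines a fixed-spacing coverage path with altitudes held a consistent distance above the roof. A toy version of those two steps, with an invented row spacing, clearance value, and roof height function, might look like this:

```python
# Illustrative only: a "lawnmower" sweep over the bounding box of a roof region,
# with each waypoint's altitude set a constant clearance above a stand-in roof
# height map. Values are made up, not taken from the patent.
import numpy as np

def lawnmower_waypoints(x_min, x_max, y_min, y_max, spacing):
    """Back-and-forth sweep covering the rectangle at the given row spacing."""
    waypoints = []
    ys = np.arange(y_min, y_max + spacing, spacing)
    for i, y in enumerate(ys):
        xs = [x_min, x_max] if i % 2 == 0 else [x_max, x_min]
        waypoints.extend((x, float(y)) for x in xs)
    return waypoints

def with_clearance(waypoints, roof_height, clearance=3.0):
    """Attach an altitude that keeps a constant distance above the roof."""
    return [(x, y, roof_height(x, y) + clearance) for x, y in waypoints]

# Example with a flat 5 m roof standing in for the real height map.
wps = with_clearance(lawnmower_waypoints(0, 20, 0, 12, spacing=4), lambda x, y: 5.0)
print(wps[:4])
```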
-
Publication number: 20220234733
Abstract: A technique is introduced for autonomous landing by an aerial vehicle. In some embodiments, the introduced technique includes processing sensor data such as images captured by onboard cameras to generate a ground map comprising multiple cells. A suitable footprint, comprising a subset of the multiple cells in the ground map that satisfy one or more landing criteria, is selected and control commands are generated to cause the aerial vehicle to autonomously land on an area corresponding to the footprint. In some embodiments, the introduced technique involves a geometric smart landing process to select a relatively flat area on the ground for landing. In some embodiments, the introduced technique involves a semantic smart landing process where semantic information regarding detected objects is incorporated into the ground map.
Type: Application
Filed: February 7, 2022
Publication date: July 28, 2022
Applicant: Skydio, Inc.
Inventors: Kristen Marie Holtz, Hayk Martirosyan, Jack Louis Zhu, Adam Parker Bry, Matthew Joseph Donahoe, Abraham Galton Bachrach, Peter Benjamin Henry, Ryan David Kennedy
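For the geometric smart-landing idea above (choosing a relatively flat set of ground-map cells), a simple stand-in is to scan a height grid for the k-by-k window with the smallest height spread. The window size and flatness threshold below are invented for the example:

```python
# Toy version of selecting a flat landing footprint from a grid of ground
# heights (one value per cell). Thresholds and grid size are made up.
import numpy as np

def flattest_footprint(height_map, k=3, max_spread=0.10):
    """Return ((row, col), spread) of the flattest k x k window, or None."""
    rows, cols = height_map.shape
    best, best_spread = None, np.inf
    for r in range(rows - k + 1):
        for c in range(cols - k + 1):
            window = height_map[r:r + k, c:c + k]
            spread = window.max() - window.min()
            if spread < best_spread:
                best, best_spread = (r, c), spread
    return (best, best_spread) if best_spread <= max_spread else None

rng = np.random.default_rng(1)
ground = rng.normal(0, 0.3, (12, 12))
ground[4:8, 5:9] = 0.0                 # a flat patch the search should find
print(flattest_footprint(ground))
```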
-
Patent number: 11347244
Abstract: An autonomous vehicle that is equipped with image capture devices can use information gathered from the image capture devices to plan a future three-dimensional (3D) trajectory through a physical environment. To this end, a technique is described for image-space based motion planning. In an embodiment, a planned 3D trajectory is projected into an image-space of an image captured by the autonomous vehicle. The planned 3D trajectory is then optimized according to a cost function derived from information (e.g., depth estimates) in the captured image. The cost function associates higher cost values with identified regions of the captured image that are associated with areas of the physical environment into which travel is risky or otherwise undesirable. The autonomous vehicle is thereby encouraged to avoid these areas while satisfying other motion planning objectives.
Type: Grant
Filed: February 12, 2020
Date of Patent: May 31, 2022
Assignee: Skydio, Inc.
Inventors: Ryan David Kennedy, Peter Benjamin Henry, Hayk Martirosyan, Jack Louis Zhu, Abraham Galton Bachrach, Adam Parker Bry
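To make the image-space costing idea concrete, the sketch below projects candidate 3D waypoints through a pinhole camera model and sums values from a per-pixel cost map. In the patent the cost is tied to depth estimates and risky regions of the image; here the intrinsics, cost map, and out-of-view penalty are fabricated for the example.

```python
# Sketch: score a candidate 3D trajectory by projecting waypoints into image
# space and summing a per-pixel cost map. Camera-frame points use z forward.
import numpy as np

def project(points_cam, fx, fy, cx, cy):
    """Project Nx3 camera-frame points to pixel coordinates (u, v)."""
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.column_stack([u, v])

def trajectory_cost(points_cam, cost_map, fx, fy, cx, cy):
    """Sum per-pixel costs at the projected waypoints (out-of-view = high cost)."""
    h, w = cost_map.shape
    total = 0.0
    for u, v in project(points_cam, fx, fy, cx, cy):
        if 0 <= int(v) < h and 0 <= int(u) < w:
            total += cost_map[int(v), int(u)]
        else:
            total += 10.0            # discourage leaving the field of view
    return total

cost_map = np.zeros((480, 640))
cost_map[:, 400:] = 5.0              # pretend the right side of the image is risky
traj = np.array([[0.0, 0.0, 2.0], [0.8, 0.0, 3.0], [1.6, 0.0, 4.0]])
print(trajectory_cost(traj, cost_map, fx=400, fy=400, cx=320, cy=240))
```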
-
Publication number: 20220120918
Abstract: In some examples, an unmanned aerial vehicle (UAV) may receive location information via the global navigation satellite system (GNSS) receiver and may receive acceleration information via an onboard accelerometer. The UAV may determine a first measurement of acceleration of the UAV in a navigation frame of reference based on information from the accelerometer prior to or during takeoff. In addition, the UAV may determine a second measurement of acceleration of the UAV in a world frame of reference based on the location information received via the GNSS receiver prior to or during takeoff. The UAV may determine a relative heading of the UAV based on the first and second acceleration measurements. The determined relative heading may be used for navigation of the UAV at least one of during or after takeoff of the UAV.
Type: Application
Filed: October 15, 2020
Publication date: April 21, 2022
Inventors: Anurag MAKINENI, Kristen Marie HOLTZ, Gareth Benoit CROSS, Hayk MARTIROSYAN
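The core geometry behind the heading estimate above can be shown in a few lines: if the same takeoff acceleration is observed both in the vehicle's local navigation frame (accelerometer) and in a GNSS-derived world frame, the horizontal angle between the two vectors gives the relative heading. Filtering, sensor biases, and gravity removal are deliberately omitted from this sketch.

```python
# Minimal sketch of recovering a relative heading from two measurements of the
# same horizontal acceleration, one in the local navigation frame and one in
# the world frame. Real systems would filter noisy data over many samples.
import math

def relative_heading(accel_nav_xy, accel_world_xy):
    """Yaw (radians) rotating the navigation frame into the world frame."""
    ax, ay = accel_nav_xy
    wx, wy = accel_world_xy
    return math.atan2(wy, wx) - math.atan2(ay, ax)

# Example: an acceleration seen as "straight ahead" onboard points north-east in the world.
print(math.degrees(relative_heading((1.0, 0.0), (0.707, 0.707))))   # ~45 degrees
```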
-
Patent number: 11295458
Abstract: Systems and methods are disclosed for tracking objects in a physical environment using visual sensors onboard an autonomous unmanned aerial vehicle (UAV). In certain embodiments, images of the physical environment captured by the onboard visual sensors are processed to extract semantic information about detected objects. Processing of the captured images may involve applying machine learning techniques such as a deep convolutional neural network to extract semantic cues regarding objects detected in the images. The object tracking can be utilized, for example, to facilitate autonomous navigation by the UAV or to generate and display augmentative information regarding tracked objects to users.
Type: Grant
Filed: November 30, 2017
Date of Patent: April 5, 2022
Assignee: Skydio, Inc.
Inventors: Saumitro Dasgupta, Hayk Martirosyan, Hema Koppula, Alex Kendall, Austin Stone, Matthew Donahoe, Abraham Galton Bachrach, Adam Parker Bry
-
Publication number: 20220057799
Abstract: Methods and systems are disclosed for an unmanned aerial vehicle (UAV) configured to autonomously navigate a physical environment while capturing images of the physical environment. In some embodiments, the motion of the UAV and a subject in the physical environment may be estimated based in part on images of the physical environment captured by the UAV. In response to estimating the motions, image capture by the UAV may be dynamically adjusted to satisfy a specified criterion related to a quality of the image capture.
Type: Application
Filed: July 29, 2021
Publication date: February 24, 2022
Applicant: Skydio, Inc.
Inventors: Hayk Martirosyan, Adam Bry, Matthew Donahoe, Abraham Bachrach, Justin Michael Sadowski
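One plausible instance of "dynamically adjusting image capture to satisfy a quality criterion" is capping exposure time so that the subject's apparent motion stays under a blur budget. The criterion and numbers below are assumptions for illustration, not the actual rule from the patent:

```python
# Illustration: choose the longest exposure that keeps motion blur from a
# moving subject under a pixel budget, given an estimate of relative motion.
def max_exposure_s(relative_speed_mps, distance_m, focal_px, max_blur_px=1.0):
    """Longest exposure keeping apparent subject motion under max_blur_px."""
    pixels_per_second = focal_px * relative_speed_mps / distance_m
    return max_blur_px / max(pixels_per_second, 1e-6)

# Subject moving 5 m/s sideways at 10 m range with a 1000 px focal length:
print(f"{max_exposure_s(5.0, 10.0, 1000.0):.4f} s")   # 0.0020 s, i.e. use <= 1/500 s
```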
-
Publication number: 20220050478
Abstract: An autonomous vehicle that is equipped with image capture devices can use information gathered from the image capture devices to plan a future three-dimensional (3D) trajectory through a physical environment. To this end, a technique is described for image-space based motion planning. In an embodiment, a planned 3D trajectory is projected into an image-space of an image captured by the autonomous vehicle. The planned 3D trajectory is then optimized according to a cost function derived from information (e.g., depth estimates) in the captured image. The cost function associates higher cost values with identified regions of the captured image that are associated with areas of the physical environment into which travel is risky or otherwise undesirable. The autonomous vehicle is thereby encouraged to avoid these areas while satisfying other motion planning objectives.
Type: Application
Filed: October 28, 2021
Publication date: February 17, 2022
Applicant: Skydio, Inc.
Inventors: Ryan David Kennedy, Peter Benjamin Henry, Hayk Martirosyan, Jack Louis Zhu, Abraham Galton Bachrach, Adam Parker Bry
-
Publication number: 20220050477
Abstract: An autonomous vehicle that is equipped with image capture devices can use information gathered from the image capture devices to plan a future three-dimensional (3D) trajectory through a physical environment. To this end, a technique is described for image-space based motion planning. In an embodiment, a planned 3D trajectory is projected into an image-space of an image captured by the autonomous vehicle. The planned 3D trajectory is then optimized according to a cost function derived from information (e.g., depth estimates) in the captured image. The cost function associates higher cost values with identified regions of the captured image that are associated with areas of the physical environment into which travel is risky or otherwise undesirable. The autonomous vehicle is thereby encouraged to avoid these areas while satisfying other motion planning objectives.
Type: Application
Filed: October 28, 2021
Publication date: February 17, 2022
Applicant: Skydio, Inc.
Inventors: Ryan David Kennedy, Peter Benjamin Henry, Hayk Martirosyan, Jack Louis Zhu, Abraham Galton Bachrach, Adam Parker Bry
-
Patent number: 11242144
Abstract: A technique is introduced for autonomous landing by an aerial vehicle. In some embodiments, the introduced technique includes processing sensor data such as images captured by onboard cameras to generate a ground map comprising multiple cells. A suitable footprint, comprising a subset of the multiple cells in the ground map that satisfy one or more landing criteria, is selected and control commands are generated to cause the aerial vehicle to autonomously land on an area corresponding to the footprint. In some embodiments, the introduced technique involves a geometric smart landing process to select a relatively flat area on the ground for landing. In some embodiments, the introduced technique involves a semantic smart landing process where semantic information regarding detected objects is incorporated into the ground map.
Type: Grant
Filed: February 11, 2019
Date of Patent: February 8, 2022
Assignee: Skydio, Inc.
Inventors: Kristen Marie Holtz, Hayk Martirosyan, Jack Louis Zhu, Adam Parker Bry, Matthew Joseph Donahoe, Abraham Galton Bachrach, Peter Benjamin Henry, Ryan David Kennedy
-
Publication number: 20220019248
Abstract: A technique is described for controlling an autonomous vehicle such as an unmanned aerial vehicle (UAV) using objective-based inputs. In an embodiment, the underlying functionality of an autonomous navigation system is exposed via an application programming interface (API). In such an embodiment, the UAV can be controlled through specifying a behavioral objective, for example, using a call to the API to set parameters for the behavioral objective. The autonomous navigation system can then incorporate perception inputs such as sensor data from sensors mounted to the UAV and the set parameters using a multi-objective motion planning process to generate a proposed trajectory that most closely satisfies the behavioral objective in view of certain constraints. In some embodiments, developers can utilize the API to build customized applications for utilizing the UAV to capture images.
Type: Application
Filed: June 28, 2021
Publication date: January 20, 2022
Applicant: Skydio, Inc.
Inventors: Jack Louis Zhu, Hayk Martirosyan, Abraham Bachrach, Matthew Donahoe, Patrick Lowe, Kristen Marie Holtz, Adam Bry
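As a purely hypothetical illustration of objective-based control through an API (none of the names below come from Skydio's actual SDK), a caller might hand the navigation system a behavioral objective plus parameters and leave trajectory generation to the onboard planner:

```python
# Hypothetical shape of an objective-based control API. Class and method names
# are invented for illustration; they are not part of any real SDK.
from dataclasses import dataclass

@dataclass
class Objective:
    name: str            # e.g. "follow_subject", "orbit_point"
    params: dict         # objective-specific parameters

class NavigationClient:
    """Stand-in for a client of an autonomous-navigation API."""
    def __init__(self):
        self.active = None

    def set_objective(self, objective: Objective):
        # In a real system this call would hand the objective to a
        # multi-objective motion planner running onboard the vehicle.
        self.active = objective
        print(f"objective set: {objective.name} {objective.params}")

client = NavigationClient()
client.set_objective(Objective("orbit_point", {"radius_m": 8.0, "speed_mps": 2.0}))
```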
-
Publication number: 20220014675
Abstract: In some examples, an unmanned aerial vehicle (UAV) may control a position of a first camera to cause the first camera to capture a first image of a target. The UAV may receive a plurality of second images from a plurality of second cameras, the plurality of second cameras positioned on the UAV for providing a plurality of different fields of view in a plurality of different directions around the UAV, the first camera having a longer focal length than the second cameras. The UAV may combine at least some of the plurality of second images to generate a composite image corresponding to the first image and having a wider-angle field of view than the first image. The UAV may send the first image and the composite image to a computing device.
Type: Application
Filed: July 12, 2021
Publication date: January 13, 2022
Inventors: Peter Benjamin HENRY, Hayk MARTIROSYAN, Abraham Galton BACHRACH, Clement GODARD, Adam Parker BRY, Ryan David KENNEDY
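A heavily simplified stand-in for combining the second cameras' images into a wider-angle composite is to map each camera's frame into the angular band it covers on a shared panoramic canvas. Real stitching (lens models, blending, parallax handling) is far more involved, and the geometry below is invented for the example:

```python
# Very simplified composite-image sketch: nearest-neighbour copy of each fixed
# camera's image into its known horizontal angular band of a panoramic canvas.
import numpy as np

def paste_into_panorama(panorama, image, yaw_deg, hfov_deg, total_fov_deg=360.0):
    """Copy an HxWx3 image into the band of the panorama it covers."""
    pano_h, pano_w, _ = panorama.shape
    img_h, img_w, _ = image.shape
    start = int((yaw_deg - hfov_deg / 2) % total_fov_deg / total_fov_deg * pano_w)
    width = int(hfov_deg / total_fov_deg * pano_w)
    cols = (np.arange(width) * img_w // max(width, 1)).clip(0, img_w - 1)
    rows = (np.arange(pano_h) * img_h // pano_h).clip(0, img_h - 1)
    band = image[rows][:, cols]
    for i in range(width):                         # wrap around 360 degrees
        panorama[:, (start + i) % pano_w] = band[:, i]
    return panorama

pano = np.zeros((120, 720, 3), dtype=np.uint8)
cam = np.full((100, 160, 3), 200, dtype=np.uint8)  # dummy frame from one camera
pano = paste_into_panorama(pano, cam, yaw_deg=90.0, hfov_deg=120.0)
print(pano.shape, pano[:, 60:300].mean())
```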
-
Patent number: 11126182
Abstract: Methods and systems are disclosed for an unmanned aerial vehicle (UAV) configured to autonomously navigate a physical environment while capturing images of the physical environment. In some embodiments, the motion of the UAV and a subject in the physical environment may be estimated based in part on images of the physical environment captured by the UAV. In response to estimating the motions, image capture by the UAV may be dynamically adjusted to satisfy a specified criterion related to a quality of the image capture.
Type: Grant
Filed: December 20, 2019
Date of Patent: September 21, 2021
Assignee: Skydio, Inc.
Inventors: Hayk Martirosyan, Adam Bry, Matthew Donahoe, Abraham Bachrach, Justin Michael Sadowski
-
Publication number: 20210271264
Abstract: A technique is introduced for touchdown detection during autonomous landing by an aerial vehicle. In some embodiments, the introduced technique includes processing perception inputs with a dynamics model of the aerial vehicle to estimate the external forces and/or torques acting on the aerial vehicle. The estimated external forces and/or torques are continually monitored while the aerial vehicle is landing to determine when the aerial vehicle is sufficiently supported by a landing surface. In some embodiments, semantic information associated with objects in the environment is utilized to configure parameters associated with the touchdown detection process.
Type: Application
Filed: May 3, 2021
Publication date: September 2, 2021
Applicant: Skydio, Inc.
Inventors: Rowland Wilde O'Flaherty, Teodor Tomic, Hayk Martirosyan, Abraham Galton Bachrach, Kristen Marie Holtz, Jack Louis Zhu
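A schematic version of the touchdown-detection idea above: use a simple dynamics model (mass, gravity, commanded thrust) to estimate the external vertical force on the vehicle and declare touchdown once that force has carried most of the weight for long enough. The 80% threshold and 0.5 s hold time are invented for the example:

```python
# Schematic touchdown detection from an estimated external vertical force.
# With z up: m*a = thrust - m*g + F_ext, so F_ext = m*a - (thrust - m*g).
def external_force_z(mass_kg, accel_z_mps2, thrust_z_n, g=9.81):
    """Estimated external vertical force: what thrust and gravity cannot explain."""
    return mass_kg * accel_z_mps2 - (thrust_z_n - mass_kg * g)

def touchdown_detected(force_history_n, mass_kg, dt_s, fraction=0.8, hold_s=0.5):
    """True if the external force exceeded fraction*weight for hold_s seconds."""
    needed = int(hold_s / dt_s)
    threshold = fraction * mass_kg * 9.81
    recent = force_history_n[-needed:]
    return len(recent) >= needed and all(f > threshold for f in recent)

# Example: a 1 kg vehicle hovering, then resting on the ground with thrust reduced.
history = [external_force_z(1.0, 0.0, thrust) for thrust in [9.81] * 10 + [1.0] * 30]
print(touchdown_detected(history, mass_kg=1.0, dt_s=0.02))
```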
-
Publication number: 20210263488
Abstract: In some examples, an unmanned aerial vehicle (UAV) may determine, based on a three-dimensional (3D) model including a plurality of points corresponding to a scan target, a scan plan for scanning at least a portion of the scan target. For instance, the scan plan may include a plurality of poses for the UAV to assume to capture images of the scan target. The UAV may capture, with one or more image sensors, one or more images of the scan target from one or more poses of the plurality of poses. Further, the UAV may determine an update to the 3D model based at least in part on the one or more images. Additionally, the UAV may update the scan plan based at least in part on the update to the 3D model.
Type: Application
Filed: February 12, 2021
Publication date: August 26, 2021
Inventors: Peter HENRY, Jack ZHU, Brian RICHMAN, Harrison ZHENG, Hayk MARTIROSYAN, Matthew DONAHOE, Abraham BACHRACH, Adam BRY, Ryan David KENNEDY, Himel MONDAL, Quentin Allen Wah Yen DELEPINE
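A bare-bones illustration of turning model points into candidate scan poses is to stand off from each point along its surface normal and look back at the surface. The standoff distance, the source of the normals, and the pose format below are assumptions, not details from the publication:

```python
# Sketch: derive a candidate camera pose for each 3D model point by standing
# off along the point's surface normal and looking back toward the surface.
import numpy as np

def scan_pose(point, normal, standoff=2.0):
    """Return (camera_position, unit view direction) for one model point."""
    n = normal / np.linalg.norm(normal)
    position = point + standoff * n
    return position, -n              # look back along the normal toward the surface

points = np.array([[0.0, 0.0, 3.0], [1.0, 0.0, 3.2]])
normals = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
poses = [scan_pose(p, n) for p, n in zip(points, normals)]
print(poses[0])
```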
-
Publication number: 20210263515
Abstract: In some examples, an unmanned aerial vehicle (UAV) employs one or more image sensors to capture images of a scan target and may use distance information from the images for determining respective locations in three-dimensional (3D) space of a plurality of points of a 3D model representative of a surface of the scan target. The UAV may compare a first image with a second image to determine a difference between a current frame of reference position for the UAV and an estimate of an actual frame of reference position for the UAV. Further, based at least on the difference, the UAV may determine, while the UAV is in flight, an update to the 3D model including at least one of an updated location of at least one point in the 3D model, or a location of a new point in the 3D model.
Type: Application
Filed: February 12, 2021
Publication date: August 26, 2021
Inventors: Peter HENRY, Jack ZHU, Brian RICHMAN, Harrison ZHENG, Hayk MARTIROSYAN, Matthew DONAHOE, Abraham BACHRACH, Adam BRY, Ryan David KENNEDY, Himel MONDAL, Quentin Allen Wah Yen DELEPINE
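For the frame-of-reference correction described above, once the difference between the assumed and estimated frames is expressed as a rigid transform, existing model points can be re-expressed in the corrected frame. The planar, yaw-only correction below is a simplification used only to show the shape of the update:

```python
# Simplified model-update step: apply a planar rigid-body correction
# (yaw rotation plus translation) to existing 2D model points.
import numpy as np

def correct_points(points_xy, yaw_rad, translation_xy):
    """Apply a planar rigid-body correction to Nx2 model points."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s], [s, c]])
    return points_xy @ R.T + np.asarray(translation_xy)

model = np.array([[1.0, 0.0], [2.0, 1.0]])
print(correct_points(model, yaw_rad=np.deg2rad(5.0), translation_xy=(0.1, -0.2)))
```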
-
Patent number: 11048277
Abstract: A technique is described for controlling an autonomous vehicle such as an unmanned aerial vehicle (UAV) using objective-based inputs. In an embodiment, the underlying functionality of an autonomous navigation system is exposed via an application programming interface (API). In such an embodiment, the UAV can be controlled through specifying a behavioral objective, for example, using a call to the API to set parameters for the behavioral objective. The autonomous navigation system can then incorporate perception inputs such as sensor data from sensors mounted to the UAV and the set parameters using a multi-objective motion planning process to generate a proposed trajectory that most closely satisfies the behavioral objective in view of certain constraints. In some embodiments, developers can utilize the API to build customized applications for utilizing the UAV to capture images.
Type: Grant
Filed: January 4, 2019
Date of Patent: June 29, 2021
Assignee: Skydio, Inc.
Inventors: Jack Louis Zhu, Hayk Martirosyan, Abraham Bachrach, Matthew Donahoe, Patrick Lowe, Kristen Marie Holtz, Adam Bry
-
Patent number: 10996683
Abstract: A technique is introduced for touchdown detection during autonomous landing by an aerial vehicle. In some embodiments, the introduced technique includes processing perception inputs with a dynamics model of the aerial vehicle to estimate the external forces and/or torques acting on the aerial vehicle. The estimated external forces and/or torques are continually monitored while the aerial vehicle is landing to determine when the aerial vehicle is sufficiently supported by a landing surface. In some embodiments, semantic information associated with objects in the environment is utilized to configure parameters associated with the touchdown detection process.
Type: Grant
Filed: February 11, 2019
Date of Patent: May 4, 2021
Assignee: Skydio, Inc.
Inventors: Rowland Wilde O'Flaherty, Teodor Tomic, Hayk Martirosyan, Abraham Galton Bachrach, Kristen Marie Holtz, Jack Louis Zhu
-
Publication number: 20210125406
Abstract: Described herein are systems and methods for structure scan using an unmanned aerial vehicle. For example, some methods include accessing a three-dimensional map of a structure; generating facets based on the three-dimensional map, wherein the facets are respectively a polygon on a plane in three-dimensional space that is fit to a subset of the points in the three-dimensional map; generating a scan plan based on the facets, wherein the scan plan includes a sequence of poses for an unmanned aerial vehicle to assume to enable capture, using image sensors of the unmanned aerial vehicle, of images of the structure; causing the unmanned aerial vehicle to fly to assume a pose corresponding to one of the sequence of poses of the scan plan; and capturing one or more images of the structure from the pose.
Type: Application
Filed: June 8, 2020
Publication date: April 29, 2021
Inventors: Peter Henry, Jack Zhu, Brian Richman, Harrison Zheng, Hayk Martirosyan, Matthew Donahoe, Abraham Galton Bachrach, Adam Bry