Patents by Inventor Jason Ziglar

Jason Ziglar has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12135375
    Abstract: Systems and methods for object detection. Object detection may be used to control autonomous vehicle(s). For example, the methods comprise: obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of an autonomous vehicle; and using, by the computing device, the LiDAR dataset and at least one image to detect an object that is in proximity to the autonomous vehicle. The object is detected by: computing a distribution of object detections that each point of the LiDAR dataset is likely to belong to; creating a plurality of segments of LiDAR data points using the distribution of object detections; and detecting the object in a point cloud defined by the LiDAR dataset based on the plurality of segments of LiDAR data points. The object detection may be used to facilitate at least one autonomous driving operation. (An illustrative Python sketch of the per-point distribution and segmentation step follows this entry.)
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: November 5, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Jason Ziglar, Arsenii Saranin, Basel Alghanem, G. Peter K. Carr
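    Illustrative sketch: The following minimal Python sketch shows one way the per-point step described above could look: each projected LiDAR point receives a soft distribution over image-space detections, and points are then segmented by their most likely detection. The Gaussian distance model and all names (point_detection_distribution, segment_by_detection, sigma) are assumptions for illustration, not details taken from the patent.

      import numpy as np

      def point_detection_distribution(points_uv, boxes, sigma=20.0):
          # points_uv: (N, 2) pixel coordinates of LiDAR points projected into the image.
          # boxes: (M, 4) image object detections as (u_min, v_min, u_max, v_max).
          # Returns an (N, M) matrix giving, per point, a distribution over detections.
          centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
          d2 = ((points_uv[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
          logits = -d2 / (2.0 * sigma ** 2)
          probs = np.exp(logits - logits.max(axis=1, keepdims=True))
          return probs / probs.sum(axis=1, keepdims=True)

      def segment_by_detection(points_xyz, probs, min_points=10):
          # Group LiDAR points by their most likely detection and report a crude
          # axis-aligned 3D box per segment.
          labels = probs.argmax(axis=1)
          segments = []
          for det_id in np.unique(labels):
              seg = points_xyz[labels == det_id]
              if len(seg) >= min_points:
                  segments.append({"detection": int(det_id),
                                   "box_min": seg.min(axis=0),
                                   "box_max": seg.max(axis=0)})
          return segments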
  • Patent number: 12050661
    Abstract: Systems and methods for object detection. The methods include, by a computing device: obtaining a plurality of intensity values denoting at least a difference in a first location of at least one object in a first image and a second location of the at least one object in a second image; converting the intensity values to 3D position values; inputting the 3D position values into a classifier algorithm to obtain classifications for data points of a 3D point cloud (each of the classifications comprising a foreground classification or a background classification); and using the classifications to detect at least one object which is located in a foreground or a background. (An illustrative Python sketch of the intensity-to-3D conversion and classification steps follows this entry.)
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: July 30, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Xiaoyan Hu, Lingyuan Wang, Michael Happold, Jason Ziglar
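    Illustrative sketch: A minimal Python sketch of converting per-pixel intensity (disparity) values into 3D positions and training a foreground/background classifier on the resulting points. The pinhole-stereo relations Z = f*B/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f are standard; the use of a scikit-learn decision tree and all names are assumptions, since the abstract does not name a specific classifier.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def disparity_to_points(disparity, f, baseline, cx, cy):
          # Pinhole-stereo conversion: Z = f*B/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f.
          v, u = np.nonzero(disparity > 0)           # skip invalid (zero) disparities
          d = disparity[v, u].astype(np.float64)
          z = f * baseline / d
          x = (u - cx) * z / f
          y = (v - cy) * z / f
          return np.column_stack([x, y, z])          # (N, 3) point cloud

      def train_foreground_classifier(points_xyz, labels):
          # labels: 1 = foreground, 0 = background, one label per 3D point.
          clf = DecisionTreeClassifier(max_depth=8)
          return clf.fit(points_xyz, labels)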
  • Patent number: 12050273
    Abstract: Systems and methods for object detection. The methods comprise: obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of an autonomous vehicle; and using, by the computing device, the LiDAR dataset and at least one image to detect an object that is in proximity to the autonomous vehicle. The object is detected by: generating a pruned LiDAR dataset by reducing a total number of points contained in the LiDAR dataset; and detecting the object in a point cloud defined by the pruned LiDAR dataset. The object detection may be used by the computing device to facilitate at least one autonomous driving operation. (An illustrative Python sketch of one possible pruning step follows this entry.)
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: July 30, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Arsenii Saranin, Basel Alghanem, G. Peter K. Carr, Jason Ziglar, Benjamin Ballard
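    Illustrative sketch: The abstract only states that the LiDAR dataset is pruned by reducing its total number of points. One plausible pruning, sketched below in Python, combines range gating with voxel downsampling; the thresholds and names are illustrative assumptions.

      import numpy as np

      def prune_lidar(points_xyz, voxel_size=0.2, max_range=75.0):
          # Reduce the total number of points by (1) dropping returns beyond a
          # maximum range and (2) keeping one representative point per voxel.
          in_range = np.linalg.norm(points_xyz, axis=1) <= max_range
          pts = points_xyz[in_range]
          voxels = np.floor(pts / voxel_size).astype(np.int64)
          _, keep_idx = np.unique(voxels, axis=0, return_index=True)
          return pts[np.sort(keep_idx)]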
  • Patent number: 11972184
    Abstract: Systems and methods for designing a robotic system architecture are disclosed. The methods include generating a model that defines one or more requirements for a robotic device for a mapping between a software graph and a hardware graph. The model is used for allocating a plurality of computational tasks in a computational path included in the software graph to a plurality of hardware components of the robotic device to yield a robotic system architecture. The methods also include using the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph, where the robotic system architecture is optimized to meet one or more latency requirements. (An illustrative Python sketch of latency-aware task allocation follows this entry.)
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: April 30, 2024
    Assignee: Argo AI, LLC
    Inventor: Jason Ziglar
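    Illustrative sketch: A toy Python sketch of allocating the computational tasks along a software-graph path onto hardware components while meeting a latency requirement. The brute-force search and the cost dictionaries (compute_cost, link_cost) are assumptions for illustration; the patent does not disclose a specific solver.

      import itertools

      def path_latency(path, placement, compute_cost, link_cost):
          # Sum per-task compute latency, plus link latency wherever consecutive
          # tasks in the path are placed on different hardware components.
          total = sum(compute_cost[(task, placement[task])] for task in path)
          for a, b in zip(path, path[1:]):
              if placement[a] != placement[b]:
                  total += link_cost[(placement[a], placement[b])]
          return total

      def allocate_path(path, hardware, compute_cost, link_cost, latency_budget):
          # Brute-force search over placements of the path's tasks onto hardware
          # nodes; return the lowest-latency placement that meets the budget.
          best = None
          for combo in itertools.product(hardware, repeat=len(path)):
              placement = dict(zip(path, combo))
              lat = path_latency(path, placement, compute_cost, link_cost)
              if lat <= latency_budget and (best is None or lat < best[1]):
                  best = (placement, lat)
          return best

    For example, with path = ['perceive', 'plan', 'control'] and hardware = ['cpu0', 'gpu0'] (and matching cost tables), allocate_path returns the cheapest feasible placement, or None when no placement meets the budget.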
  • Publication number: 20230273976
    Abstract: Systems and methods for object detection. The methods include, by a computing device: obtaining a plurality of intensity values denoting at least a difference in a first location of at least one object in a first image and a second location of the at least one object in a second image; converting the intensity values to 3D position values; inputting the 3D position values into a classifier algorithm to obtain classifications for data points of a 3D point cloud (each of the classifications comprising a foreground classification or a background classification); and using the classifications to detect at least one object which is located in a foreground or a background.
    Type: Application
    Filed: May 3, 2023
    Publication date: August 31, 2023
    Inventors: Xiaoyan Hu, Lingyuan Wang, Michael Happold, Jason Ziglar
  • Patent number: 11645364
    Abstract: Systems and methods for object detection. The methods comprise, by a computing device: obtaining a plurality of intensity values denoting at least a difference in a first location of at least one object in a first image and a second location of the at least one object in a second image; converting the intensity values to 3D position values; inputting the 3D position values into a classifier algorithm to obtain classifications for data points of a 3D point cloud (each of the classifications comprising a foreground classification or a background classification); and using the classifications to detect at least one object which is located in a foreground or a background.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: May 9, 2023
    Inventors: Xiaoyan Hu, Lingyuan Wang, Michael Happold, Jason Ziglar
  • Publication number: 20220374659
    Abstract: Systems and methods for object detection. The methods comprise, by a computing device: obtaining a plurality of intensity values denoting at least a difference in a first location of at least one object in a first image and a second location of the at least one object in a second image; converting the intensity values to 3D position values; inputting the 3D position values into a classifier algorithm to obtain classifications for data points of a 3D point cloud (each of the classifications comprising a foreground classification or a background classification); and using the classifications to detect at least one object which is located in a foreground or a background.
    Type: Application
    Filed: August 2, 2022
    Publication date: November 24, 2022
    Inventors: Xiaoyan Hu, Lingyuan Wang, Michael Happold, Jason Ziglar
  • Publication number: 20220292241
    Abstract: Systems and methods for designing a robotic system architecture are disclosed. The methods include generating a model that defines one or more requirements for a robotic device for a mapping between a software graph and a hardware graph. The model is used for allocating a plurality of computational tasks in a computational path included in the software graph to a plurality of hardware components of the robotic device to yield a robotic system architecture. The methods also include using the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph, where the robotic system architecture is optimized to meet one or more latency requirements.
    Type: Application
    Filed: May 25, 2022
    Publication date: September 15, 2022
    Inventor: Jason Ziglar
  • Patent number: 11443147
    Abstract: Systems and methods for operating a vehicle. The methods comprise, by a processor: obtaining a pair of stereo images captured by a stereo camera; processing the pair of stereo images to generate a disparity map comprising a plurality of pixels defined by intensity values; converting each intensity value to a 3D position in a map (each 3D position defining a location of a data point in a point cloud); performing a hierarchical decision tree classification to determine a classification for each data point in the point cloud (the classification being a foreground classification or a background classification); and using the classifications to facilitate autonomous control of the vehicle. (An illustrative Python sketch of the disparity and classification steps follows this entry.)
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: September 13, 2022
    Assignee: Argo AI, LLC
    Inventors: Xiaoyan Hu, Lingyuan Wang, Michael Happold, Jason Ziglar
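    Illustrative sketch: A Python sketch of the first step (stereo pair to disparity map) followed by a toy stand-in for the hierarchical decision tree classification. The patent's classification is a learned tree, so the hand-coded two-level decision below is only illustrative, as are the choice of the OpenCV block matcher and all thresholds.

      import cv2
      import numpy as np

      def stereo_to_disparity(left_gray, right_gray):
          # Rectified grayscale stereo pair -> disparity map via OpenCV block
          # matching (StereoBM returns fixed-point disparities scaled by 16).
          matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
          disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
          disp[disp <= 0] = 0.0                      # mark invalid matches
          return disp

      def classify_points(points_xyz, ground_z=0.0, near_x=30.0):
          # Toy two-level decision in place of the learned hierarchical tree:
          # level 1 splits on height above ground, level 2 on forward range.
          labels = np.zeros(len(points_xyz), dtype=np.uint8)   # 0 = background
          above_ground = points_xyz[:, 2] > ground_z + 0.3
          near = points_xyz[:, 0] < near_x
          labels[above_ground & near] = 1                      # 1 = foreground
          return labels

    The disparity map produced above can be converted to a 3D point cloud (for example with the disparity_to_points helper sketched earlier in this listing) before the per-point classification is applied.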
  • Publication number: 20220188578
    Abstract: Systems and methods for operating a vehicle. The methods comprise, by a processor: obtaining a pair of stereo images captured by a stereo camera; processing the pair of stereo images to generate a disparity map comprising a plurality of pixels defined by intensity values; converting each intensity value to a 3D position in a map (each 3D position defining a location of a data point in a point cloud); performing a hierarchical decision tree classification to determine a classification for each data point in the point cloud (the classification being a foreground classification or a background classification); and using the classifications to facilitate autonomous control of the vehicle.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 16, 2022
    Inventors: Xiaoyan Hu, Lingyuan Wang, Michael Happold, Jason Ziglar
  • Patent number: 11354473
    Abstract: Systems and methods for designing a robotic system architecture are disclosed. The methods include defining a software graph including a first plurality of nodes representing computational tasks, and a first plurality of edges representative of data flow between those tasks, and defining a hardware graph including a second plurality of nodes, and a second plurality of edges. The methods may include mapping the software graph to the hardware graph, modeling a latency associated with a computational path included in the software graph for the mapping between the software graph and the hardware graph, allocating a plurality of computational tasks in the computational path to a plurality of hardware components using the modeled latency to yield a robotic system architecture, and using the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: June 7, 2022
    Assignee: Argo AI, LLC
    Inventor: Jason Ziglar
  • Publication number: 20220128700
    Abstract: Systems and methods for object detection. The methods comprise: obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of an autonomous vehicle; and using, by the computing device, the LiDAR dataset and at least one image to detect an object that is in proximity to the autonomous vehicle. The object is detected by: generating a pruned LiDAR dataset by reducing a total number of points contained in the LiDAR dataset; and detecting the object in a point cloud defined by the pruned LiDAR dataset. The object detection may be used by the computing device to facilitate at least one autonomous driving operation.
    Type: Application
    Filed: October 23, 2020
    Publication date: April 28, 2022
    Inventors: Arsenii Saranin, Basel Alghanem, G. Peter K. Carr, Jason Ziglar, Benjamin Ballard
  • Publication number: 20220128702
    Abstract: Systems and methods for object detection. Object detection may be used to control autonomous vehicle(s). For example, the methods comprise: obtaining, by a computing device, a LiDAR dataset generated by a LiDAR system of an autonomous vehicle; and using, by the computing device, the LiDAR dataset and at least one image to detect an object that is in proximity to the autonomous vehicle. The object is detected by: computing a distribution of object detections that each point of the LiDAR dataset is likely to belong to; creating a plurality of segments of LiDAR data points using the distribution of object detections; and detecting the object in a point cloud defined by the LiDAR dataset based on the plurality of segments of LiDAR data points. The object detection may be used to facilitate at least one autonomous driving operation.
    Type: Application
    Filed: October 23, 2020
    Publication date: April 28, 2022
    Inventors: Jason Ziglar, Arsenii Saranin, Basel Alghanem, G. Peter K. Carr
  • Patent number: 9322148
    Abstract: A mapping system includes a pose sensor, a mapping sensor, a database defining a work surface, and a controller. The controller is configured to receive pose signals and determine the position and orientation of the machine, and to receive mapping signals and determine a plurality of raw data points. The controller further determines a plurality of machine points defining a position of a portion of the machine and filters the plurality of raw data points based upon the plurality of machine points to define a plurality of filtered data points. The database may be updated with the plurality of filtered data points. (An illustrative Python sketch of the machine-point filtering step follows this entry.)
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: April 26, 2016
    Assignees: Caterpillar Inc., Carnegie-Mellon University
    Inventors: Kenneth L. Stratton, Louis Bojarski, Peter Rander, Randon Warner, Jason Ziglar
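    Illustrative sketch: A short Python sketch of filtering raw mapping points against points known to lie on the machine, so that the machine body or implement is not written into the work-surface database. The KD-tree lookup, the exclusion radius, and the names are assumptions for illustration.

      import numpy as np
      from scipy.spatial import cKDTree

      def filter_machine_points(raw_points, machine_points, exclusion_radius=0.5):
          # Drop mapping-sensor returns within exclusion_radius metres of any known
          # machine point; the remaining (filtered) points update the database.
          tree = cKDTree(machine_points)
          nearest_dist, _ = tree.query(raw_points, k=1)
          return raw_points[nearest_dist > exclusion_radius]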
  • Publication number: 20150361642
    Abstract: A mapping system includes a pose sensor, a mapping sensor, a database defining a work surface, and a controller. The controller is configured to receive pose signals and determine the position and orientation of the machine, and to receive mapping signals and determine a plurality of raw data points. The controller further determines a plurality of machine points defining a position of a portion of the machine and filters the plurality of raw data points based upon the plurality of machine points to define a plurality of filtered data points. The database may be updated with the plurality of filtered data points.
    Type: Application
    Filed: June 16, 2014
    Publication date: December 17, 2015
    Inventors: Kenneth L. Stratton, Louis Bojarski, Peter Rander, Randon Warner, Jason Ziglar
  • Publication number: 20100026555
    Abstract: An arrangement for obstacle detection in autonomous vehicles wherein two significant data manipulations are employed to provide a more accurate reading of potential obstacles and thus contribute to more efficient and effective operation of an autonomous vehicle. A first data manipulation involves distinguishing between those potential obstacles that are surrounded by significant background scatter in a radar diagram and those that are not, wherein the latter are more likely to represent binary obstacles that are to be avoided. A second data manipulation involves updating a radar image to the extent possible as an object comes into closer range. Preferably, the first data manipulation may be performed via context filtering, while the second may be performed via blob-based hysteresis. (An illustrative Python sketch of these two steps follows this entry.)
    Type: Application
    Filed: June 11, 2007
    Publication date: February 4, 2010
    Inventors: William L. Whittaker, Joshua Johnston, Jason Ziglar
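    Illustrative sketch: A toy Python sketch in the spirit of the two data manipulations: a context filter that compares each radar cell against its local background level, and a blob-based hysteresis that keeps previously detected blobs alive while evidence stays above a lower threshold. The grid representation, thresholds, and names are assumptions, not the patented implementation.

      from scipy import ndimage

      def context_filter(radar_grid, margin=0.6, window=7):
          # Flag cells whose return clearly exceeds the local background, loosely
          # mimicking the separation of crisp returns from background scatter.
          background = ndimage.uniform_filter(radar_grid, size=window)
          return (radar_grid - background) > margin

      def update_blobs(radar_grid, previous_mask, high=0.6, low=0.3):
          # Blob-style hysteresis: seed new blobs only from strong evidence, but
          # retain or refine existing blobs wherever evidence stays above a lower bar.
          strong = context_filter(radar_grid, margin=high)
          weak = context_filter(radar_grid, margin=low)
          mask = strong | (previous_mask & weak)
          labeled, n_blobs = ndimage.label(mask)
          return labeled > 0, n_blobs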