Patents by Inventor Zachary D. Jorgensen

Zachary D. Jorgensen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11507052
    Abstract: A method and apparatus for manufacturing a part. The part is designed using a CAD system to generate a CAD part model of the part. Features of the part are identified from the CAD part model of the part. A parametric specification of the part is generated using the features of the part. The parametric specification of the part is saved as a parametric part model. The parametric part model is used to fabricate the part.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: November 22, 2022
    Assignee: The Boeing Company
    Inventors: Oliver William Sykes, Zachary D. Jorgensen, Daniel S. ReMine
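The workflow in the abstract above (identify features from a CAD part model, assemble a parametric specification, save it as a parametric part model) could be sketched roughly as follows. The feature kinds, class names, and JSON serialization are illustrative assumptions, not the patented implementation, and the hard-coded features stand in for real feature recognition on a CAD model.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Feature:
    """A single geometric feature identified from the CAD part model."""
    name: str
    kind: str                      # e.g. "hole", "flange", "pocket"
    parameters: dict = field(default_factory=dict)

@dataclass
class ParametricPartModel:
    """A parametric specification assembled from the part's features."""
    part_id: str
    features: list

    def to_json(self) -> str:
        # Serialize the specification so it can be saved and later
        # handed to a fabrication process.
        return json.dumps(asdict(self), indent=2)

# Hard-coded stand-ins for features recognized from a CAD part model.
features = [
    Feature("bolt_hole_1", "hole", {"diameter_mm": 6.0, "depth_mm": 12.0}),
    Feature("edge_flange", "flange", {"angle_deg": 90.0, "length_mm": 40.0}),
]
model = ParametricPartModel(part_id="bracket-001", features=features)
spec = model.to_json()             # the saved parametric part model
```

In this reading, the parametric part model is simply a serializable record of the part's features that downstream fabrication tooling can consume.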
  • Patent number: 10997748
    Abstract: A method of machine learning model development includes receiving a plurality of images of a scene, and performing an unsupervised image selection. This includes applying the images to a pre-trained model to extract and embed the images with respective feature vectors, and performing a cluster analysis to group the images into clusters based on correlations among the respective feature vectors. The unsupervised image selection also includes selecting at least some but not all images in each of the clusters, and any images considered outliers that belong to none of the clusters, for a subset of the images that includes fewer than all of the images. And the method includes receiving user input to label, or labeling, objects depicted in the subset of the images to produce a training set of images, and building a machine learning model for object detection using the training set of images.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: May 4, 2021
    Assignee: The Boeing Company
    Inventors: Tyler Staudinger, Zachary D. Jorgensen
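The unsupervised image selection described above (embed images with a pre-trained model, cluster the feature vectors, then keep a few images per cluster plus any outliers) can be approximated with a minimal sketch. The greedy leader clustering, cosine-similarity threshold, and synthetic feature vectors below are assumptions standing in for the real embeddings and cluster analysis.

```python
import numpy as np

def select_images(features, sim_threshold=0.9, per_cluster=2):
    """Greedy leader clustering on feature vectors (a simple stand-in for
    the abstract's cluster analysis), then select a few images per cluster
    plus any outliers that join no cluster."""
    norms = features / np.linalg.norm(features, axis=1, keepdims=True)
    leaders, clusters = [], []          # cluster leaders and member lists
    for i, v in enumerate(norms):
        sims = [float(v @ norms[l]) for l in leaders]
        if sims and max(sims) >= sim_threshold:
            clusters[int(np.argmax(sims))].append(i)
        else:
            leaders.append(i)
            clusters.append([i])
    selected, outliers = [], []
    for members in clusters:
        if len(members) == 1:
            outliers.append(members[0])  # singleton: treated as an outlier
        else:
            selected.extend(members[:per_cluster])
    return sorted(selected + outliers)

# Synthetic "embeddings": three tight groups of near-duplicates plus one
# unrelated vector, mimicking redundant frames from a scene.
rng = np.random.default_rng(0)
base = rng.normal(size=(3, 64))
feats = np.vstack([b + 0.01 * rng.normal(size=(10, 64)) for b in base]
                  + [rng.normal(size=(1, 64))])
subset = select_images(feats)        # a few per group, plus the outlier
```

With 31 input vectors, the sketch keeps only two representatives per cluster plus the lone outlier, which is the point of the method: labeling effort is spent on a small, diverse subset rather than on every image.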
  • Publication number: 20200334856
    Abstract: A method of machine learning model development includes receiving a plurality of images of a scene, and performing an unsupervised image selection. This includes applying the images to a pre-trained model to extract and embed the images with respective feature vectors, and performing a cluster analysis to group the images into clusters based on correlations among the respective feature vectors. The unsupervised image selection also includes selecting at least some but not all images in each of the clusters, and any images considered outliers that belong to none of the clusters, for a subset of the images that includes fewer than all of the images. And the method includes receiving user input to label, or labeling, objects depicted in the subset of the images to produce a training set of images, and building a machine learning model for object detection using the training set of images.
    Type: Application
    Filed: April 19, 2019
    Publication date: October 22, 2020
    Inventors: Tyler Staudinger, Zachary D. Jorgensen
  • Publication number: 20200265630
    Abstract: A set of 3D user-designed images is used to create a high volume of realistic scenes or images which can be used for training and testing deep learning machines. The system creates a high volume of scenes having a wide variety of environmental, weather-related factors as well as scenes that take into account camera noise, distortion, angle of view, and the like. A generative modeling process is used to vary objects contained in an image so that more images, each one distinct, can be used to train the deep learning model without the inefficiencies of creating videos of actual, real-life scenes. Object label data is known by virtue of a designer selecting an object from an image database and placing it in the scene. This and other methods are used to artificially create new scenes that do not have to be recorded in real-life conditions and that do not require costly and time-consuming manual labelling or tagging of objects.
    Type: Application
    Filed: May 4, 2020
    Publication date: August 20, 2020
    Applicant: The Boeing Company
    Inventors: Huafeng Yu, Tyler C. Staudinger, Zachary D. Jorgensen, Jan Wei Pan
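The scene-composition idea above (place a designer-selected object into a scene so its label is known automatically, then vary the scene with environmental and camera effects) can be sketched as follows. The flat backdrop, dark sprite, brightness-jitter range, noise level, and bounding-box annotation format are all illustrative assumptions.

```python
import numpy as np

def compose_scene(background, obj, label, rng):
    """Place an object sprite into a background image and return the
    composed scene plus an automatically known annotation: no manual
    tagging is needed because the designer chose and placed the object."""
    H, W = background.shape[:2]
    h, w = obj.shape[:2]
    y = int(rng.integers(0, H - h + 1))     # random placement
    x = int(rng.integers(0, W - w + 1))
    scene = background.copy()
    scene[y:y + h, x:x + w] = obj
    # Simple environmental variation: global brightness jitter plus
    # additive noise, standing in for weather and camera effects.
    scene = scene.astype(np.float32) * float(rng.uniform(0.7, 1.3))
    scene += rng.normal(0.0, 5.0, scene.shape)
    scene = np.clip(scene, 0, 255).astype(np.uint8)
    return scene, {"label": label, "bbox": (x, y, w, h)}

rng = np.random.default_rng(1)
bg = np.full((120, 160, 3), 128, dtype=np.uint8)   # flat grey backdrop
car = np.zeros((20, 30, 3), dtype=np.uint8)        # dark stand-in sprite
scene, ann = compose_scene(bg, car, "car", rng)
```

Calling the function repeatedly with different seeds and sprites would yield many distinct, pre-labeled training scenes, which is the labor-saving point of the abstract.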
  • Patent number: 10643368
    Abstract: A set of 3D user-designed images is used to create a high volume of realistic scenes or images which can be used for training and testing deep learning machines. The system creates a high volume of scenes having a wide variety of environmental, weather-related factors as well as scenes that take into account camera noise, distortion, angle of view, and the like. A generative modeling process is used to vary objects contained in an image so that more images, each one distinct, can be used to train the deep learning model without the inefficiencies of creating videos of actual, real-life scenes. Object label data is known by virtue of a designer selecting an object from an image database and placing it in the scene. This and other methods are used to artificially create new scenes that do not have to be recorded in real-life conditions and that do not require costly and time-consuming manual labelling or tagging of objects.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: May 5, 2020
    Assignee: The Boeing Company
    Inventors: Huafeng Yu, Tyler C. Staudinger, Zachary D. Jorgensen, Jan Wei Pan
  • Publication number: 20200096967
    Abstract: A method and apparatus for manufacturing a part. The part is designed using a CAD system to generate a CAD part model of the part. Features of the part are identified from the CAD part model of the part. A parametric specification of the part is generated using the features of the part. The parametric specification of the part is saved as a parametric part model. The parametric part model is used to fabricate the part.
    Type: Application
    Filed: September 24, 2018
    Publication date: March 26, 2020
    Inventors: Oliver William Sykes, Zachary D. Jorgensen, Daniel S. ReMine
  • Publication number: 20180374253
    Abstract: A set of 3D user-designed images is used to create a high volume of realistic scenes or images which can be used for training and testing deep learning machines. The system creates a high volume of scenes having a wide variety of environmental, weather-related factors as well as scenes that take into account camera noise, distortion, angle of view, and the like. A generative modeling process is used to vary objects contained in an image so that more images, each one distinct, can be used to train the deep learning model without the inefficiencies of creating videos of actual, real-life scenes. Object label data is known by virtue of a designer selecting an object from an image database and placing it in the scene. This and other methods are used to artificially create new scenes that do not have to be recorded in real-life conditions and that do not require costly and time-consuming manual labelling or tagging of objects.
    Type: Application
    Filed: June 27, 2017
    Publication date: December 27, 2018
    Applicant: The Boeing Company
    Inventors: Huafeng Yu, Tyler C. Staudinger, Zachary D. Jorgensen, Jan Wei Pan