Patents by Inventor John Patrick Kaufhold

John Patrick Kaufhold is named as an inventor on the following patent filings. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10643123
    Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: May 5, 2020
    Assignee: General Dynamics Mission Systems, Inc.
    Inventor: John Patrick Kaufhold
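The abstract above describes a two-stage detect-then-recognize cascade over a radar image stream. The sketch below is not from the patent: both stages in the patented system would be deep neural networks, whereas here a simple threshold detector and a rule-based recognizer stand in for them, purely to illustrate the cascade's data flow from radar frames to per-object semantic labels.

```python
# Illustrative detect-then-recognize cascade over a radar "image".
# Stage 1 proposes candidate pixels; stage 2 assigns semantic labels.
# Both stages are trivial stand-ins for trained deep networks.

def detect_candidates(image, threshold):
    """Stage 1: flag pixels whose return intensity exceeds a threshold."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if value >= threshold]

def recognize(image, candidates):
    """Stage 2: assign a semantic label to each detected candidate.
    A real system would run a trained classifier on an image chip here."""
    labels = {}
    for r, c in candidates:
        labels[(r, c)] = "vehicle" if image[r][c] >= 0.9 else "clutter"
    return labels

def label_stream(frames, threshold=0.5):
    """Run the cascade over a stream of radar frames, one frame at a time."""
    for frame in frames:
        yield recognize(frame, detect_candidates(frame, threshold))

frame = [[0.1, 0.95, 0.2],
         [0.6, 0.1, 0.1]]
result = next(label_stream([frame]))
# result maps each detected pixel coordinate to a semantic label
```

The generator form mirrors the real-time constraint in the abstract: each frame is labeled as it arrives rather than after the whole stream is buffered.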
  • Publication number: 20200074235
    Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
    Type: Application
    Filed: October 25, 2019
    Publication date: March 5, 2020
    Applicant: General Dynamics Mission Systems, Inc.
    Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
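The abstract above describes densifying a sparse real training set with translated synthetic images and GAN-generated images. The sketch below shows only that data flow; the `translate` and `generate` functions are trivial stand-ins for the patented convolutional-autoencoder translator and GAN generator, not implementations of them.

```python
import random

# Data-flow sketch: a sparse set of real images is supplemented with
# (a) synthetic images "translated" toward realism and (b) new samples
# "generated" from real images plus noise. Both functions are stand-ins.

def translate(synthetic_image, bias=0.1):
    """Stand-in for an autoencoder translator: nudge synthetic pixel
    statistics toward the real domain."""
    return [min(1.0, p + bias) for p in synthetic_image]

def generate(real_images, rng, jitter=0.05):
    """Stand-in for a GAN generator: perturb a sampled real image with
    noise to emit a new realistic-looking sample."""
    base = rng.choice(real_images)
    return [max(0.0, min(1.0, p + rng.uniform(-jitter, jitter)))
            for p in base]

def densify(real_images, synthetic_images, n_generated, seed=0):
    """Build a dense training set from a sparse real set."""
    rng = random.Random(seed)
    dense = list(real_images)
    dense += [translate(s) for s in synthetic_images]
    dense += [generate(real_images, rng) for _ in range(n_generated)]
    return dense

real = [[0.4, 0.5], [0.6, 0.7]]          # sparse real-world set
synthetic = [[0.2, 0.3]]                 # dense synthetic renders
dense = densify(real, synthetic, n_generated=5)
```

The point of the structure is that `n_generated` can be made arbitrarily large, which is exactly how a sparse real set becomes a comparatively dense one.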
  • Publication number: 20200065626
    Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
    Type: Application
    Filed: October 25, 2019
    Publication date: February 27, 2020
    Applicant: General Dynamics Mission Systems, Inc.
    Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
  • Publication number: 20200065627
    Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
    Type: Application
    Filed: October 25, 2019
    Publication date: February 27, 2020
    Applicant: General Dynamics Mission Systems, Inc.
    Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
  • Publication number: 20200065625
    Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
    Type: Application
    Filed: October 25, 2019
    Publication date: February 27, 2020
    Applicant: General Dynamics Mission Systems, Inc.
    Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
  • Patent number: 10504004
    Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: December 10, 2019
    Assignee: General Dynamics Mission Systems, Inc.
    Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
  • Publication number: 20190080205
    Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
    Type: Application
    Filed: September 15, 2017
    Publication date: March 14, 2019
    Applicant: Deep Learning Analytics, LLC
    Inventors: John Patrick Kaufhold, Jennifer Sleeman
  • Publication number: 20180260688
    Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
    Type: Application
    Filed: May 11, 2018
    Publication date: September 13, 2018
    Applicant: Deep Learning Analytics, LLC
    Inventor: John Patrick Kaufhold
  • Patent number: 9990687
    Abstract: Embodiments of the present invention are directed to providing new systems and methods for using deep learning techniques to generate embeddings for high-dimensional data objects that can both simulate prior-art embedding algorithms and outperform those prior-art methods.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: June 5, 2018
    Assignee: Deep Learning Analytics, LLC
    Inventors: John Patrick Kaufhold, Michael Jeremy Trammell
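The details of the patented deep-learning embedding method are not reproduced here, so the sketch below illustrates only the generic operation the abstract names: mapping a high-dimensional data object to a low-dimensional embedding. A random linear projection stands in for the learned network.

```python
import math
import random

# Generic embedding sketch: project a high-dimensional vector into a
# low-dimensional space. The random projection is a stand-in for the
# patent's learned deep-network embedding.

def make_projection(in_dim, out_dim, seed=0):
    """Build a random (out_dim x in_dim) projection matrix."""
    rng = random.Random(seed)
    scale = 1.0 / math.sqrt(out_dim)
    return [[rng.gauss(0.0, scale) for _ in range(in_dim)]
            for _ in range(out_dim)]

def embed(vector, projection):
    """Map a high-dimensional vector to its low-dimensional embedding."""
    return [sum(w * x for w, x in zip(row, vector)) for row in projection]

proj = make_projection(in_dim=100, out_dim=3)
point = [1.0] * 100            # a 100-dimensional data object
emb = embed(point, proj)       # its 3-dimensional embedding
```

A learned embedding would replace the fixed random matrix with parameters optimized on data, but the input/output contract is the same.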
  • Patent number: 9978013
    Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: May 22, 2018
    Assignee: Deep Learning Analytics, LLC
    Inventor: John Patrick Kaufhold
  • Publication number: 20160019458
    Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 21, 2016
    Applicant: Deep Learning Analytics, LLC
    Inventor: John Patrick KAUFHOLD
  • Patent number: 8340373
    Abstract: A technique is provided for generating quantitative projection images from projection images. The pixels of the quantitative projection images correspond to quantitative composition estimates of two or more materials. The quantitative projection images are reconstructed to generate a quantitative volume in which each voxel value corresponds quantitatively to the two or more materials or a mixture of the two or more materials.
    Type: Grant
    Filed: May 23, 2007
    Date of Patent: December 25, 2012
    Assignee: General Electric Company
    Inventors: Bernhard Erich Hermann Claus, John Patrick Kaufhold
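The core arithmetic behind a two-material quantitative projection, as the abstract frames it, can be sketched as follows: each pixel's measurements at two X-ray energies are modeled as linear combinations of the two materials' path lengths, so the composition estimate is the solution of a 2x2 linear system. The attenuation coefficients below are made-up illustrative numbers, not values from the patent.

```python
# Per-pixel two-material decomposition: solve a 2x2 linear system
#   [m_low ]   [a11 a12] [t1]
#   [m_high] = [a21 a22] [t2]
# for the material thicknesses t1, t2 via Cramer's rule.

def decompose_pixel(m_low, m_high, a):
    """Estimate thicknesses (t1, t2) of two materials from measurements
    at two energies, given attenuation coefficient matrix a."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    t1 = (m_low * a[1][1] - m_high * a[0][1]) / det
    t2 = (m_high * a[0][0] - m_low * a[1][0]) / det
    return t1, t2

# Made-up attenuation coefficients for two materials at two energies.
A = [[0.5, 0.2],
     [0.3, 0.4]]
t1, t2 = decompose_pixel(m_low=0.9, m_high=0.7, a=A)
```

Applying this at every pixel yields the quantitative projection images the abstract describes; reconstructing those images then gives a volume whose voxel values are quantitative material estimates.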
  • Publication number: 20110293200
    Abstract: A technique is provided for generating quantitative projection images from projection images. The pixels of the quantitative projection images correspond to quantitative composition estimates of two or more materials. The quantitative projection images are reconstructed to generate a quantitative volume in which each voxel value corresponds quantitatively to the two or more materials or a mixture of the two or more materials.
    Type: Application
    Filed: May 23, 2007
    Publication date: December 1, 2011
    Inventors: Bernhard Erich Claus, John Patrick Kaufhold
  • Patent number: 7653263
    Abstract: A technique is provided for comparative image analysis and/or change detection using computer assisted detection and/or diagnosis (CAD) algorithms. The technique includes registering two or more images, comparing the images with one another to generate a change map, and detecting anomalies in the images based on the change map.
    Type: Grant
    Filed: June 30, 2005
    Date of Patent: January 26, 2010
    Assignee: General Electric Company
    Inventors: Frederick Wilson Wheeler, Bernhard Erich Hermann Claus, John Patrick Kaufhold, Jeffrey Wayne Eberhard, Mark Lewis Grabb, Cynthia Elizabeth Landberg
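The compare-and-detect pipeline in the abstract above can be sketched minimally as: difference two registered images into a change map, then flag anomalies where the change exceeds a threshold. Registration is assumed already done here, and the threshold detector is a stand-in for the CAD algorithms the patent covers.

```python
# Minimal change-detection sketch: registered image pair -> change map
# -> anomaly coordinates. The thresholding stands in for CAD algorithms.

def change_map(image_a, image_b):
    """Pixelwise absolute difference of two registered images."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(image_a, image_b)]

def detect_anomalies(cmap, threshold):
    """Return coordinates where the change map exceeds the threshold."""
    return [(r, c)
            for r, row in enumerate(cmap)
            for c, v in enumerate(row)
            if v > threshold]

prior = [[0.1, 0.1], [0.1, 0.1]]      # earlier registered image
current = [[0.1, 0.8], [0.1, 0.1]]    # later registered image
anomalies = detect_anomalies(change_map(prior, current), threshold=0.3)
```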
  • Patent number: 7653229
    Abstract: Some configurations of a method for reconstructing a volumetric image of an object include obtaining a tomosynthesis projection dataset of the object. The method also includes utilizing the tomosynthesis projection dataset and additional information about the object to minimize a selected energy function or functions so as to satisfy a selected set of constraints. Alternatively, constraints are applied to a reconstructed volumetric image in order to obtain an updated volumetric image. A 3D volume representative of the imaged object is thereby obtained, in which each voxel is reconstructed with an indicated correspondence to a single one of the component material classes.
    Type: Grant
    Filed: December 23, 2003
    Date of Patent: January 26, 2010
    Assignee: General Electric Company
    Inventors: John Patrick Kaufhold, Bernhard Erich Hermann Claus
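A toy sketch of the structure the abstract describes, minimize an energy, then apply constraints, is shown below. Real tomosynthesis reconstruction minimizes a fidelity term over projections through a 3D volume; this 1D stand-in minimizes a simple quadratic energy by gradient descent and then enforces the constraint that each voxel corresponds to a single material class by snapping to the nearest class value.

```python
# Toy constrained reconstruction: gradient descent on a data-fidelity
# energy, followed by a constraint step that maps each voxel to exactly
# one material class. A 1D stand-in for the patented 3D method.

def reconstruct(measured, classes, steps=200, lr=0.1):
    x = [0.0] * len(measured)             # initial volume estimate
    for _ in range(steps):                # minimize sum (x_i - m_i)^2
        x = [xi - lr * 2.0 * (xi - mi) for xi, mi in zip(x, measured)]
    # Constraint: each voxel corresponds to a single material class.
    return [min(classes, key=lambda c: abs(c - xi)) for xi in x]

material_classes = [0.0, 0.5, 1.0]        # e.g. air, tissue, bone
volume = reconstruct([0.05, 0.48, 0.93], material_classes)
```

The two-phase structure (unconstrained minimization, then constraint application) corresponds to the "alternatively" branch of the abstract, where constraints update an already-reconstructed volume.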
  • Publication number: 20080292217
    Abstract: A technique is provided for generating quantitative projection images from projection images. The pixels of the quantitative projection images correspond to quantitative composition estimates of two or more materials. The quantitative projection images are reconstructed to generate a quantitative volume in which each voxel value corresponds quantitatively to the two or more materials or a mixture of the two or more materials.
    Type: Application
    Filed: May 23, 2007
    Publication date: November 27, 2008
    Inventors: Bernhard Erich Hermann Claus, John Patrick Kaufhold
  • Patent number: 7440603
    Abstract: Briefly, in accordance with one embodiment, the present technique provides a multi-energy tomosynthesis imaging system. The system includes an X-ray source configured to emit X-rays from multiple locations within a limited angular range relative to an imaging volume. The imaging system also includes a digital detector with an array of detector elements to generate images in response to the emitted X-rays. The imaging system further includes detector acquisition circuitry to acquire the images from the digital detector. The imaging system may also include processing circuitry configured to decompose a plurality of images based on energy characteristics and to reconstruct the plurality of images to generate a three-dimensional multi-energy tomosynthesis image.
    Type: Grant
    Filed: February 26, 2008
    Date of Patent: October 21, 2008
    Assignee: General Electric Company
    Inventors: Jeffrey Wayne Eberhard, John Patrick Kaufhold, Bernhard Erich Hermann Claus, Kadri Nizar Jabri, Gopal B Avinash, John Michael Sabol
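The reconstruction step alone can be illustrated with classic shift-and-add tomosynthesis, which brings one plane into focus by undoing each limited-angle view's parallax shift and averaging. This is a generic textbook technique, not the patent's specific method, and the multi-energy decomposition step (separating the projections by energy before reconstruction) is omitted; shifts are whole pixels for simplicity.

```python
# Shift-and-add tomosynthesis sketch: refocus one plane by shifting each
# 1D projection by its view-dependent parallax and averaging the views.

def shift_and_add(projections, shifts):
    """Average 1D projections after undoing each view's parallax shift."""
    width = len(projections[0])
    plane = [0.0] * width
    for proj, s in zip(projections, shifts):
        for i in range(width):
            plane[i] += proj[(i + s) % width]
    return [v / len(projections) for v in plane]

# Three views of a point object at plane index 1, whose apparent position
# shifts by +1, 0, and -1 pixels across the limited angular range.
views = [[0, 0, 1, 0, 0],
         [0, 1, 0, 0, 0],
         [1, 0, 0, 0, 0]]
plane = shift_and_add(views, shifts=[1, 0, -1])
```

Structures in the focus plane reinforce (the point sums coherently at index 1), while out-of-plane structures would smear across neighboring pixels.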
  • Publication number: 20080144767
    Abstract: Briefly, in accordance with one embodiment, the present technique provides a multi-energy tomosynthesis imaging system. The system includes an X-ray source configured to emit X-rays from multiple locations within a limited angular range relative to an imaging volume. The imaging system also includes a digital detector with an array of detector elements to generate images in response to the emitted X-rays. The imaging system further includes detector acquisition circuitry to acquire the images from the digital detector. The imaging system may also include processing circuitry configured to decompose a plurality of images based on energy characteristics and to reconstruct the plurality of images to generate a three-dimensional multi-energy tomosynthesis image.
    Type: Application
    Filed: February 26, 2008
    Publication date: June 19, 2008
    Applicant: General Electric Company
    Inventors: Jeffrey Wayne Eberhard, John Patrick Kaufhold, Bernhard Erich Hermann Claus, Kadri Nizar Jabri, Gopal B. Avinash, John Michael Sabol
  • Patent number: 7352885
    Abstract: Briefly, in accordance with one embodiment, the present technique provides a multi-energy tomosynthesis imaging system. The system includes an X-ray source configured to emit X-rays from multiple locations within a limited angular range relative to an imaging volume. The imaging system also includes a digital detector with an array of detector elements to generate images in response to the emitted X-rays. The imaging system further includes detector acquisition circuitry to acquire the images from the digital detector. The imaging system may also include processing circuitry configured to decompose a plurality of images based on energy characteristics and to reconstruct the plurality of images to generate a three-dimensional multi-energy tomosynthesis image.
    Type: Grant
    Filed: September 30, 2004
    Date of Patent: April 1, 2008
    Assignee: General Electric Company
    Inventors: Jeffrey Wayne Eberhard, John Patrick Kaufhold, Bernhard Erich Hermann Claus, Kadri Nizar Jabri, Gopal B Avinash, John Michael Sabol
  • Patent number: 7149335
    Abstract: A method for facilitating an enhancement of the visibility of an object in an x-ray image includes generating an x-ray image including at least one object, generating an estimate of the background surrounding the at least one object, subtracting the background estimate from the x-ray image to generate an estimate of pixel intensities due to the object, mapping the estimate of pixel intensities due to the object, and combining the mapped estimate of pixel intensities due to the object with the x-ray image to generate an enhanced image.
    Type: Grant
    Filed: September 27, 2002
    Date of Patent: December 12, 2006
    Assignee: General Electric Company
    Inventor: John Patrick Kaufhold
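The enhancement recipe in the abstract above, estimate the background, subtract it to isolate the object's pixel intensities, remap those intensities, and recombine with the original image, can be sketched on a 1D row of pixels. The 3-tap mean background estimator and the simple gain mapping below are stand-ins for whatever estimator and mapping the patent actually specifies.

```python
# 1D sketch of background-subtraction enhancement:
# background estimate -> object residual -> intensity mapping -> recombine.

def estimate_background(row):
    """Crude background: mean of each pixel's immediate 3-tap neighborhood.
    A stand-in for the patent's background estimator."""
    return [sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
            for i in range(len(row))]

def enhance(row, gain=2.0):
    background = estimate_background(row)
    residual = [p - b for p, b in zip(row, background)]   # object estimate
    mapped = [gain * r for r in residual]                 # intensity mapping
    return [p + m for p, m in zip(row, mapped)]           # recombine

row = [0.2, 0.2, 0.8, 0.2, 0.2]    # flat background with one bright object
out = enhance(row)                 # object pixel is amplified relative to background
```

With a gain above zero, the object pixel's contrast against its estimated background increases, which is the enhancement the abstract describes; a practical mapping would also clip or renormalize the output range.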