Patents by Inventor John Patrick Kaufhold
John Patrick Kaufhold has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10643123
Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
Type: Grant
Filed: May 11, 2018
Date of Patent: May 5, 2020
Assignee: General Dynamics Mission Systems, Inc.
Inventor: John Patrick Kaufhold
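As a rough sketch of the inference pipeline this abstract describes, the example below pushes radar image chips through a classifier and collects one semantic label per chip. Everything here is illustrative: the label set, network shape, and random weights are hypothetical stand-ins for a trained deep neural network.

```python
import numpy as np

LABELS = ["vehicle", "building", "clutter"]  # hypothetical label set

def init_network(rng, in_dim=64, hidden=32, n_classes=3):
    """Random weights stand in for a trained deep network."""
    return {
        "W1": rng.standard_normal((in_dim, hidden)) * 0.1,
        "W2": rng.standard_normal((hidden, n_classes)) * 0.1,
    }

def classify_chip(net, chip):
    """Map one flattened radar image chip to a semantic label."""
    h = np.maximum(chip @ net["W1"], 0.0)   # ReLU hidden layer
    logits = h @ net["W2"]
    return LABELS[int(np.argmax(logits))]

def label_stream(net, stream):
    """Convert a stream of detected chips into semantic labels."""
    return [classify_chip(net, chip) for chip in stream]

rng = np.random.default_rng(0)
net = init_network(rng)
stream = [rng.standard_normal(64) for _ in range(5)]
print(label_stream(net, stream))
```

A real deployment would load trained convolutional weights and run on the onboard GPU or FPGA hardware the abstract mentions.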
-
Publication number: 20200074235
Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
Type: Application
Filed: October 25, 2019
Publication date: March 5, 2020
Applicant: General Dynamics Mission Systems, Inc.
Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
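The translator idea in this abstract, mapping a dense synthetic set into more realistic training images, can be illustrated with a deliberately simplified stand-in: a least-squares affine map fitted on a small paired set, rather than the convolutional autoencoder or GAN the patent describes. All data, dimensions, and the synthetic-vs-real relationship below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: "synthetic" images differ from "real" ones by an
# unknown affine change in contrast and brightness, plus a little noise.
def make_pairs(n, dim=16):
    real = rng.random((n, dim))
    synthetic = 0.5 * real + 0.2 + 0.01 * rng.standard_normal((n, dim))
    return synthetic, real

def fit_translator(synthetic, real):
    """Least-squares affine map standing in for the autoencoder translator."""
    X = np.hstack([synthetic, np.ones((len(synthetic), 1))])  # bias column
    coef, *_ = np.linalg.lstsq(X, real, rcond=None)
    return coef

def translate(coef, synthetic):
    X = np.hstack([synthetic, np.ones((len(synthetic), 1))])
    return X @ coef

syn_paired, real_paired = make_pairs(50)    # modest paired training set
coef = fit_translator(syn_paired, real_paired)

syn_dense, real_dense = make_pairs(500)     # dense synthetic set
augmented = translate(coef, syn_dense)      # dense "realistic" set
err = np.mean(np.abs(augmented - real_dense))
print(f"mean abs error after translation: {err:.3f}")
```

The augmented set could then be pooled with the sparse real-world images to train the downstream recognizer.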
-
Publication number: 20200065627
Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
Type: Application
Filed: October 25, 2019
Publication date: February 27, 2020
Applicant: General Dynamics Mission Systems, Inc.
Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
-
Publication number: 20200065625
Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
Type: Application
Filed: October 25, 2019
Publication date: February 27, 2020
Applicant: General Dynamics Mission Systems, Inc.
Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
-
Publication number: 20200065626
Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
Type: Application
Filed: October 25, 2019
Publication date: February 27, 2020
Applicant: General Dynamics Mission Systems, Inc.
Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
-
Patent number: 10504004
Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
Type: Grant
Filed: September 15, 2017
Date of Patent: December 10, 2019
Assignee: General Dynamics Mission Systems, Inc.
Inventors: John Patrick Kaufhold, Jennifer Alexander Sleeman
-
Publication number: 20190080205
Abstract: Embodiments of the present invention relate to systems and methods for improving the training of machine learning systems to recognize certain objects within a given image by supplementing an existing sparse set of real-world training images with a comparatively dense set of realistic training images. Embodiments may create such a dense set of realistic training images by training a machine learning translator with a convolutional autoencoder to translate a dense set of synthetic images of an object into more realistic training images. Embodiments may also create a dense set of realistic training images by training a generative adversarial network (“GAN”) to create realistic training images from a combination of the existing sparse set of real-world training images and either Gaussian noise, translated images, or synthetic images.
Type: Application
Filed: September 15, 2017
Publication date: March 14, 2019
Applicant: Deep Learning Analytics, LLC
Inventors: John Patrick Kaufhold, Jennifer Sleeman
-
Publication number: 20180260688
Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
Type: Application
Filed: May 11, 2018
Publication date: September 13, 2018
Applicant: Deep Learning Analytics, LLC
Inventor: John Patrick Kaufhold
-
Patent number: 9990687
Abstract: Embodiments of the present invention are directed to providing new systems and methods for using deep learning techniques to generate embeddings for high-dimensional data objects that can both simulate prior-art embedding algorithms and provide superior performance compared to those prior-art methods.
Type: Grant
Filed: March 9, 2017
Date of Patent: June 5, 2018
Assignee: Deep Learning Analytics, LLC
Inventors: John Patrick Kaufhold, Michael Jeremy Trammell
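A classical stand-in can illustrate the interface such an embedding system exposes: high-dimensional objects in, low-dimensional coordinates out. The sketch below uses PCA via SVD rather than the deep-learning method the patent covers, and the data are synthetic.

```python
import numpy as np

def embed(X, dim=2):
    """Project high-dimensional rows of X to `dim` coordinates via PCA,
    a classical stand-in for the learned embedding the patent describes."""
    Xc = X - X.mean(axis=0)                 # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T                  # project onto top principal axes

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 50))          # 100 objects, 50 dimensions each
Y = embed(X)
print(Y.shape)
```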
-
Patent number: 9978013
Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
Type: Grant
Filed: July 8, 2015
Date of Patent: May 22, 2018
Assignee: Deep Learning Analytics, LLC
Inventor: John Patrick Kaufhold
-
Publication number: 20160019458
Abstract: The present invention is directed to systems and methods for detecting objects in a radar image stream. Embodiments of the invention can receive a data stream from radar sensors and use a deep neural network to convert the received data stream into a set of semantic labels, where each semantic label corresponds to an object in the radar data stream that the deep neural network has identified. Processing units running the deep neural network may be collocated onboard an airborne vehicle along with the radar sensor(s). The processing units can be configured with powerful, high-speed graphics processing units or field-programmable gate arrays that are low in size, weight, and power requirements. Embodiments of the invention are also directed to providing innovative advances to object recognition training systems that utilize a detector and an object recognition cascade to analyze radar image streams in real time.
Type: Application
Filed: July 8, 2015
Publication date: January 21, 2016
Applicant: Deep Learning Analytics, LLC
Inventor: John Patrick Kaufhold
-
Patent number: 8340373
Abstract: A technique is provided for generating quantitative projection images from projection images. The pixels of the quantitative projection images correspond to quantitative composition estimates of two or more materials. The quantitative projection images are reconstructed to generate a quantitative volume in which each voxel value corresponds quantitatively to the two or more materials or a mixture of the two or more materials.
Type: Grant
Filed: May 23, 2007
Date of Patent: December 25, 2012
Assignee: General Electric Company
Inventors: Bernhard Erich Hermann Claus, John Patrick Kaufhold
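The per-pixel, two-material composition estimate this abstract describes can be sketched as a simple linear inversion of a mixture model. The material names, attenuation coefficients, and slab geometry below are hypothetical, not values from the patent.

```python
import numpy as np

# Hypothetical linear attenuation coefficients for the two materials
MU_GLANDULAR = 0.80
MU_ADIPOSE = 0.45

def composition_estimate(attenuation, thickness):
    """Per-pixel fraction f of material A, assuming a two-material mixture:
    attenuation = thickness * (f * MU_A + (1 - f) * MU_B)."""
    mu_eff = attenuation / thickness
    f = (mu_eff - MU_ADIPOSE) / (MU_GLANDULAR - MU_ADIPOSE)
    return np.clip(f, 0.0, 1.0)

# Simulate a projection of a 4 cm slab that is 30% material A everywhere
thickness = 4.0
true_f = 0.30
atten = thickness * (true_f * MU_GLANDULAR + (1 - true_f) * MU_ADIPOSE)
image = np.full((4, 4), atten)
f_map = composition_estimate(image, thickness)
print(f_map[0, 0])
```

Reconstructing such per-pixel composition maps over many view angles is what yields the quantitative volume the abstract refers to.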
-
Publication number: 20110293200
Abstract: A technique is provided for generating quantitative projection images from projection images. The pixels of the quantitative projection images correspond to quantitative composition estimates of two or more materials. The quantitative projection images are reconstructed to generate a quantitative volume in which each voxel value corresponds quantitatively to the two or more materials or a mixture of the two or more materials.
Type: Application
Filed: May 23, 2007
Publication date: December 1, 2011
Inventors: Bernhard Erich Claus, John Patrick Kaufhold
-
Patent number: 7653263
Abstract: A technique is provided for comparative image analysis and/or change detection using computer assisted detection and/or diagnosis (CAD) algorithms. The technique includes registering two or more images, comparing the images with one another to generate a change map, and detecting anomalies in the images based on the change map.
Type: Grant
Filed: June 30, 2005
Date of Patent: January 26, 2010
Assignee: General Electric Company
Inventors: Frederick Wilson Wheeler, Bernhard Erich Hermann Claus, John Patrick Kaufhold, Jeffrey Wayne Eberhard, Mark Lewis Grabb, Cynthia Elizabeth Landberg
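The register/compare/detect flow in this abstract can be sketched with a brute-force integer-shift search standing in for a full registration algorithm; the images, threshold, and injected anomaly below are all synthetic.

```python
import numpy as np

def register_shift(ref, moving, max_shift=3):
    """Brute-force integer-shift registration (a stand-in for full registration)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def change_map(ref, moving):
    """Align the second image to the first, then difference them."""
    dy, dx = register_shift(ref, moving)
    aligned = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
    return np.abs(ref - aligned)

def detect_anomalies(cmap, thresh=0.25):
    """Flag pixel locations whose change exceeds the threshold."""
    return np.argwhere(cmap > thresh)

rng = np.random.default_rng(2)
ref = rng.random((16, 16)) * 0.1
moving = np.roll(ref, 2, axis=1)   # same scene, shifted between acquisitions
moving[8, 8] += 0.5                # a new "anomaly" in the later image
cm = change_map(ref, moving)
print(detect_anomalies(cm))
```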
-
Patent number: 7653229
Abstract: Some configurations of a method for reconstructing a volumetric image of an object include obtaining a tomosynthesis projection dataset of the object. The method also includes utilizing the tomosynthesis projection dataset and additional information about the object to minimize a selected energy function or functions so as to satisfy a selected set of constraints. Alternatively, constraints are applied to a reconstructed volumetric image in order to obtain an updated volumetric image. A 3D volume representative of the imaged object is thereby obtained in which each voxel is reconstructed and assigned a correspondence to a single one of the component material classes.
Type: Grant
Filed: December 23, 2003
Date of Patent: January 26, 2010
Assignee: General Electric Company
Inventors: John Patrick Kaufhold, Bernhard Erich Hermann Claus
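One way to picture the alternative this abstract mentions, applying constraints to an already-reconstructed volume, is to minimize the data-fit energy first and then snap each voxel to its nearest material class. The toy below does that on a 1-D "volume" with a random projection geometry; the two class values and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
CLASS_VALUES = np.array([0.0, 1.0])          # hypothetical two material classes

n = 8
x_true = (rng.random(n) > 0.5).astype(float)  # toy "volume" of two materials
A = rng.random((12, n))                       # toy projection geometry
b = A @ x_true                                # tomosynthesis projection dataset

# Step 1: minimize the energy ||Ax - b||^2 (unconstrained least squares)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Step 2: apply the constraint, snapping each voxel to its nearest class
x_hat = CLASS_VALUES[np.argmin(np.abs(x_ls[:, None] - CLASS_VALUES), axis=1)]
print((x_hat == x_true).all())
```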
-
Publication number: 20080292217
Abstract: A technique is provided for generating quantitative projection images from projection images. The pixels of the quantitative projection images correspond to quantitative composition estimates of two or more materials. The quantitative projection images are reconstructed to generate a quantitative volume in which each voxel value corresponds quantitatively to the two or more materials or a mixture of the two or more materials.
Type: Application
Filed: May 23, 2007
Publication date: November 27, 2008
Inventors: Bernhard Erich Hermann Claus, John Patrick Kaufhold
-
Patent number: 7440603
Abstract: Briefly, in accordance with one embodiment, the present technique provides a multi-energy tomosynthesis imaging system. The system includes an X-ray source configured to emit X-rays from multiple locations within a limited angular range relative to an imaging volume. The imaging system also includes a digital detector with an array of detector elements to generate images in response to the emitted X-rays. The imaging system further includes detector acquisition circuitry to acquire the images from the digital detector. The imaging system may also include processing circuitry configured to decompose a plurality of images based on energy characteristics and to reconstruct the plurality of images to generate a three-dimensional multi-energy tomosynthesis image.
Type: Grant
Filed: September 30, 2004
Date of Patent: October 21, 2008
Assignee: General Electric Company
Inventors: Jeffrey Wayne Eberhard, John Patrick Kaufhold, Bernhard Erich Hermann Claus, Kadri Nizar Jabri, Gopal B Avinash, John Michael Sabol
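The energy-based decomposition step can be sketched as solving a 2x2 linear system per pixel for two material thicknesses, given a low-energy and a high-energy projection. The attenuation coefficients and slab thicknesses below are made up for illustration.

```python
import numpy as np

# Hypothetical attenuation coefficients (cm^-1) at low/high energy
MU = np.array([[0.50, 0.30],    # material 1: low, high
               [0.20, 0.15]])   # material 2: low, high

def decompose(low_img, high_img):
    """Solve, per pixel, the 2x2 system A @ [t1, t2] = [atten_lo, atten_hi]
    for the thicknesses t1, t2 of the two materials."""
    A = MU.T                                 # rows: energies, cols: materials
    Ainv = np.linalg.inv(A)
    atten = np.stack([low_img, high_img])    # shape (2, H, W)
    t = np.tensordot(Ainv, atten, axes=1)    # shape (2, H, W) of thicknesses
    return t[0], t[1]

# Simulate projections of 1 cm of material 1 over 2 cm of material 2
t1_true, t2_true = 1.0, 2.0
low = np.full((3, 3), t1_true * MU[0, 0] + t2_true * MU[1, 0])
high = np.full((3, 3), t1_true * MU[0, 1] + t2_true * MU[1, 1])
t1, t2 = decompose(low, high)
print(t1[0, 0], t2[0, 0])
```

Running such a decomposition on each projection angle, then reconstructing, would give the three-dimensional multi-energy image the abstract describes.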
-
Publication number: 20080144767
Abstract: Briefly, in accordance with one embodiment, the present technique provides a multi-energy tomosynthesis imaging system. The system includes an X-ray source configured to emit X-rays from multiple locations within a limited angular range relative to an imaging volume. The imaging system also includes a digital detector with an array of detector elements to generate images in response to the emitted X-rays. The imaging system further includes detector acquisition circuitry to acquire the images from the digital detector. The imaging system may also include processing circuitry configured to decompose a plurality of images based on energy characteristics and to reconstruct the plurality of images to generate a three-dimensional multi-energy tomosynthesis image.
Type: Application
Filed: February 26, 2008
Publication date: June 19, 2008
Applicant: General Electric Company
Inventors: Jeffrey Wayne Eberhard, John Patrick Kaufhold, Bernhard Erich Hermann Claus, Kadri Nizar Jabri, Gopal B. Avinash, John Michael Sabol
-
Patent number: 7352885
Abstract: Briefly, in accordance with one embodiment, the present technique provides a multi-energy tomosynthesis imaging system. The system includes an X-ray source configured to emit X-rays from multiple locations within a limited angular range relative to an imaging volume. The imaging system also includes a digital detector with an array of detector elements to generate images in response to the emitted X-rays. The imaging system further includes detector acquisition circuitry to acquire the images from the digital detector. The imaging system may also include processing circuitry configured to decompose a plurality of images based on energy characteristics and to reconstruct the plurality of images to generate a three-dimensional multi-energy tomosynthesis image.
Type: Grant
Filed: September 30, 2004
Date of Patent: April 1, 2008
Assignee: General Electric Company
Inventors: Jeffrey Wayne Eberhard, John Patrick Kaufhold, Bernhard Erich Hermann Claus, Kadri Nizar Jabri, Gopal B Avinash, John Michael Sabol
-
Patent number: 7149335
Abstract: A method for facilitating an enhancement of a visibility of an object in an x-ray image includes generating an x-ray image including at least one object, generating an estimate of a background surrounding the at least one object, subtracting the background estimate from the x-ray image to generate an estimate of pixel intensities due to the object, mapping the estimate of pixel intensities due to the object, and combining the mapped estimate of pixel intensities due to the object and the x-ray image to generate an enhanced image.
Type: Grant
Filed: September 27, 2002
Date of Patent: December 12, 2006
Assignee: General Electric Company
Inventor: John Patrick Kaufhold
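The enhancement steps listed in this abstract, estimate the background, subtract it, map the residual, and recombine, can be sketched with a mean filter standing in for the background estimate and a linear gain standing in for the mapping; both of those choices are assumptions, not the patent's specific operators.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple mean filter used as the background estimate (a stand-in for
    whatever background model the method actually uses)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(img, gain=2.0):
    background = box_blur(img)     # estimate of the background
    detail = img - background      # pixel intensities due to the object
    mapped = gain * detail         # "mapping" step (here: linear gain)
    return img + mapped            # recombine with the original image

img = np.full((32, 32), 100.0)
img[14:18, 14:18] += 5.0           # faint object on a flat background
out = enhance(img)
contrast_before = img[15, 15] - img[0, 0]
contrast_after = out[15, 15] - out[0, 0]
print(contrast_before, contrast_after)
```

The object-to-background contrast grows after enhancement, which is the visibility improvement the abstract targets.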