Patents by Inventor Hyukseong Kwon

Hyukseong Kwon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10574967
    Abstract: A method for performing an operation on an object includes capturing a plurality of images of the object. Each image is a different view of the object. The method also includes generating a sparse 3D point cloud from the plurality of images. The sparse 3D point cloud defines a 3D model of the object. The sparse 3D point cloud includes a multiplicity of missing points that each correspond to a hole in the 3D model that renders the 3D model unusable for performing the operation on the object. The method additionally includes performing curvature-based upsampling to generate a denser 3D point cloud. The denser 3D point cloud includes a plurality of filled missing points. The missing points are filled from performance of the curvature-based upsampling. The denser 3D point cloud defines a dense 3D model that is useable for performing the operation on the object.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: February 25, 2020
    Assignee: The Boeing Company
    Inventors: Hyukseong Kwon, Kyungnam Kim, Heiko Hoffmann
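The curvature-based upsampling described in the abstract above can be illustrated with a toy sketch (not the patented algorithm): a missing depth sample is filled by a second-order interpolation through its valid neighbors, so the fill respects local surface curvature instead of flattening it with a plain average.

```python
# Toy sketch of curvature-aware hole filling (illustrative only, not the
# patented method): estimate a missing sample in a 1D depth profile with a
# cubic Lagrange interpolant through the four nearest valid samples.

def fill_missing(depth, i):
    """Estimate the missing value depth[i] from samples at i-2, i-1, i+1, i+2.

    Lagrange interpolation at x = 0 with nodes x = -2, -1, 1, 2 gives the
    closed-form weights (-1, 4, 4, -1) / 6, exact for any quadratic surface.
    """
    d_m2, d_m1, d_p1, d_p2 = depth[i - 2], depth[i - 1], depth[i + 1], depth[i + 2]
    return (-d_m2 + 4 * d_m1 + 4 * d_p1 - d_p2) / 6

# A profile sampled from z = x**2 with a hole at index 3 (true value 9):
profile = [0, 1, 4, None, 16, 25, 36]
print(fill_missing(profile, 3))  # -> 9.0
```

A plain two-neighbor average would give (4 + 16) / 2 = 10 here; the curvature-aware weights recover the exact value.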
  • Publication number: 20200050191
    Abstract: Systems and methods are provided for controlling an autonomous vehicle. In one embodiment, a method includes: receiving sensor data from one or more sensors of the vehicle; processing, by a processor, the sensor data to determine object data indicating at least one element within a scene of an environment of the vehicle; processing, by the processor, the sensor data to determine ground truth data associated with the element; determining, by the processor, an uncertainty model based on the ground truth data and the object data; training, by the processor, vehicle functions based on the uncertainty model; and controlling the vehicle based on the trained vehicle functions.
    Type: Application
    Filed: August 7, 2018
    Publication date: February 13, 2020
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Hyukseong Kwon, Kyungnam Kim
  • Patent number: 10409279
    Abstract: A system and method are taught for data processing in an autonomous vehicle control system. Using information acquired from the vehicle, the network interface, and sensors mounted on the vehicle, the system can perceive situations around it with far less computational complexity and without losing crucial details, and then make navigation and control decisions. The system and method are operative to generate situation-aware events, store them, and recall them to predict situations for autonomous driving.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: September 10, 2019
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Hyukseong Kwon, Youngkwan Cho, Rajan Bhattacharyya, Michael J. Daily
  • Publication number: 20190005162
    Abstract: Methods, apparatus, and articles of manufacture are disclosed to generate a synthetic point cloud of a spacecraft. An example apparatus includes a point cloud generator to generate a first synthetic point cloud of a first simulated space vehicle based on a simulated illumination source and a simulated image sensor, where the simulated illumination source and the simulated image sensor are operatively coupled to a second simulated space vehicle at a first position, where the simulated image sensor measures a parameter of the first simulated space vehicle, and where the simulated illumination source uses a first configuration.
    Type: Application
    Filed: June 29, 2017
    Publication date: January 3, 2019
    Inventors: Hyukseong Kwon, Kyungnam Kim
  • Publication number: 20180322640
    Abstract: Described is a system for detecting moving objects. During operation, the system obtains ego-motion velocity data of a moving platform and generates a predicted image of a scene proximate the moving platform by projecting three-dimensional (3D) data into an image plane based on pixel values of the scene. A contrast image is generated based on the difference between the predicted image and an actual image taken at the next step in time. An actionable prediction map is then generated based on the contrast image. Finally, one or more moving objects may be detected based on the actionable prediction map.
    Type: Application
    Filed: April 23, 2018
    Publication date: November 8, 2018
    Inventors: Kyungnam Kim, Hyukseong Kwon, Heiko Hoffmann
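The contrast-image step in the abstract above can be sketched in a few lines (illustrative only; the pixel representation and threshold are assumptions, not from the patent): the predicted image is compared against the actual next frame, and pixels whose disagreement exceeds a threshold become the actionable prediction map.

```python
# Minimal sketch of contrast-image moving-object detection: a predicted
# frame (rendered from ego-motion and 3D data) is compared to the actual
# next frame; large disagreements suggest independently moving objects.

def contrast_image(predicted, actual):
    """Per-pixel absolute difference between predicted and actual frames."""
    return [[abs(a - p) for p, a in zip(prow, arow)]
            for prow, arow in zip(predicted, actual)]

def moving_pixels(predicted, actual, threshold):
    """Actionable prediction map: pixel locations whose contrast exceeds
    the threshold, i.e., candidate moving-object locations."""
    c = contrast_image(predicted, actual)
    return [(r, k) for r, row in enumerate(c)
            for k, v in enumerate(row) if v > threshold]

predicted = [[10, 10, 10],
             [10, 10, 10]]
actual    = [[10, 10, 10],
             [10, 90, 10]]   # a bright object appeared where none was predicted
print(moving_pixels(predicted, actual, threshold=20))  # -> [(1, 1)]
```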
  • Publication number: 20180276793
    Abstract: A method for performing an operation on an object includes capturing a plurality of images of the object. Each image is a different view of the object. The method also includes generating a sparse 3D point cloud from the plurality of images. The sparse 3D point cloud defines a 3D model of the object. The sparse 3D point cloud includes a multiplicity of missing points that each correspond to a hole in the 3D model that renders the 3D model unusable for performing the operation on the object. The method additionally includes performing curvature-based upsampling to generate a denser 3D point cloud. The denser 3D point cloud includes a plurality of filled missing points. The missing points are filled from performance of the curvature-based upsampling. The denser 3D point cloud defines a dense 3D model that is useable for performing the operation on the object.
    Type: Application
    Filed: March 23, 2017
    Publication date: September 27, 2018
    Inventors: Hyukseong Kwon, Kyungnam Kim, Heiko Hoffmann
  • Publication number: 20180217595
    Abstract: A system and method are taught for data processing in an autonomous vehicle control system. Using information acquired from the vehicle, the network interface, and sensors mounted on the vehicle, the system can perceive situations around it with far less computational complexity and without losing crucial details, and then make navigation and control decisions. The system and method are operative to generate situation-aware events, store them, and recall them to predict situations for autonomous driving.
    Type: Application
    Filed: January 31, 2017
    Publication date: August 2, 2018
    Inventors: Hyukseong Kwon, Youngkwan Cho, Rajan Bhattacharyya, Michael J. Daily
  • Publication number: 20180217603
    Abstract: A system and method are taught for data processing in which the environment around the self-vehicle is encoded into overlapping egocentric and geocentric coordinate systems. The overlapping coordinate systems are then divided into adaptively sized grid cells according to characteristics of the environment and the self-vehicle status. Each grid cell is defined with one of a set of representative event patterns and a risk value to the self-vehicle. The autonomous driving system is then operative to provide a real-time assessment of the surrounding environment in response to the grid cell data. Temporal sequences of the grid cell data are stored in the episodic memory and recalled from it during driving.
    Type: Application
    Filed: January 31, 2017
    Publication date: August 2, 2018
    Inventors: Hyukseong Kwon, Youngkwan Cho, Rajan Bhattacharyya
  • Publication number: 20180210939
    Abstract: Described is a system for an episodic memory used by an automated platform. The system acquires data from an episodic memory that comprises an event database, an event-sequence graph, and an episode list. Using the event-sequence graph, the system identifies a closest node to a current environment for the automated platform. Based on the closest node and using a hash function or key based on the hash function, the system retrieves from the event database an episode that corresponds to the closest node, the episode including a sequence of events. Behavior of the automated platform in the current environment is guided based on the data from the episodic memory.
    Type: Application
    Filed: January 25, 2018
    Publication date: July 26, 2018
    Inventors: Youngkwan Cho, Hyukseong Kwon, Rajan Bhattacharyya
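The hash-keyed recall described in the abstract above can be sketched as follows (class and method names are hypothetical, not from the patent): stored states act as event-sequence graph nodes, and each node's hash keys an episode in the event database.

```python
# Illustrative sketch of episodic-memory recall: find the stored node
# closest to the current environment, then fetch its episode (a sequence
# of events) from the event database via a hash key.

class EpisodicMemory:
    def __init__(self):
        self.nodes = []      # states serving as event-sequence graph nodes
        self.event_db = {}   # hash(state) -> episode (list of events)

    def store(self, state, episode):
        self.nodes.append(state)
        self.event_db[hash(tuple(state))] = episode

    def recall(self, current):
        """Closest node by squared Euclidean distance, then a hash lookup."""
        closest = min(self.nodes,
                      key=lambda n: sum((a - b) ** 2 for a, b in zip(n, current)))
        return self.event_db[hash(tuple(closest))]

mem = EpisodicMemory()
mem.store([0.0, 0.0], ["slow", "stop"])          # e.g. approaching a junction
mem.store([5.0, 1.0], ["merge", "accelerate"])   # e.g. entering a highway
print(mem.recall([4.8, 0.9]))  # -> ['merge', 'accelerate']
```

In this toy version the hash is computed on the exact stored state, so the nearest-node search is what makes recall tolerant of novel inputs.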
  • Patent number: 9972067
    Abstract: A method for three-dimensional point cloud registration includes generating a first upsampled three-dimensional point cloud by identifying at least one missing point in the three-dimensional point cloud, determining an intensity of neighboring pixels, filling the at least one missing point in the three-dimensional point cloud with a filler point using depth information from depth values in the three-dimensional point cloud that correspond with the neighboring pixels, generating a second upsampled three-dimensional point cloud by determining at least one local area of the first upsampled three-dimensional point cloud, determining entropies of pixels in the two-dimensional image that correspond with the at least one local area, adding at least one point to the at least one local area based on the entropies of pixels in the two-dimensional image and a scaled entropy threshold, and registering the second upsampled three-dimensional point cloud with a predetermined three-dimensional model.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: May 15, 2018
    Assignee: The Boeing Company
    Inventors: Hyukseong Kwon, Kyungnam Kim
  • Publication number: 20180128625
    Abstract: A tracking system for tracking a moving target includes a processor and a tracking module that implements an iterative process for tracking the moving target. The iterative process includes receiving sensor data for a current state of the moving target. The process also includes applying a filter engine to the sensor data and a measure of error of a previous prediction of the current state of the target to produce a first estimate for an upcoming state and a first measure of error thereof. The process also includes receiving from at least one second tracking system, at least one second estimate for the upcoming state and second measure of error thereof. The process further includes defining a consensus estimate for the upcoming state and a consensus measure of error thereof using the first estimate and the at least one second estimate and the first and second measure of error thereof.
    Type: Application
    Filed: March 31, 2017
    Publication date: May 10, 2018
    Inventors: Hyukseong Kwon, David W. Payton, Chong Ding
  • Publication number: 20180128621
    Abstract: An apparatus is provided for tracking a target moving between states using an iterative process. The apparatus receives sensor data for a current state i, and applies a cubature information filter and an H-infinity filter thereto to respectively produce an estimate for the upcoming state i+1 and a measure of error thereof, and adjust the measure of error. The apparatus then defines a consensus estimate of the upcoming state i+1 and a consensus adjusted measure of error thereof from the estimate and adjusted measure of error, and a second estimate and second adjusted measure of error that is received from at least one second apparatus tracking the target. The apparatus then applies a cubature information filter to the consensus estimate of the upcoming state i+1 and the consensus adjusted measure of error to predict the upcoming state i+1.
    Type: Application
    Filed: November 4, 2016
    Publication date: May 10, 2018
    Inventors: Hyukseong Kwon, David W. Payton, Chong Ding
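The consensus step in the two tracking entries above can be illustrated with a scalar sketch (the patent works with cubature information and H-infinity filters over full state vectors; this reduction is an assumption for clarity): each tracker's estimate is weighted by its information, i.e. the inverse of its error variance.

```python
# Scalar sketch of information-weighted consensus fusion (illustrative
# only): fuse independent estimates by their inverse error variances; the
# fused variance is the inverse of the total information.

def consensus(estimates, variances):
    """Information-weighted fusion of independent scalar estimates."""
    infos = [1.0 / v for v in variances]
    fused = sum(e * i for e, i in zip(estimates, infos)) / sum(infos)
    fused_var = 1.0 / sum(infos)
    return fused, fused_var

# A confident tracker (variance 1.0) pulls the consensus toward itself,
# and the fused variance is smaller than either input's.
est, var = consensus([10.0, 14.0], [1.0, 4.0])
print(est, var)  # -> 10.8 0.8
```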
  • Publication number: 20180101932
    Abstract: A method for three-dimensional point cloud registration includes generating a first upsampled three-dimensional point cloud by identifying at least one missing point in the three-dimensional point cloud, determining an intensity of neighboring pixels, filling the at least one missing point in the three-dimensional point cloud with a filler point using depth information from depth values in the three-dimensional point cloud that correspond with the neighboring pixels, generating a second upsampled three-dimensional point cloud by determining at least one local area of the first upsampled three-dimensional point cloud, determining entropies of pixels in the two-dimensional image that correspond with the at least one local area, adding at least one point to the at least one local area based on the entropies of pixels in the two-dimensional image and a scaled entropy threshold, and registering the second upsampled three-dimensional point cloud with a predetermined three-dimensional model.
    Type: Application
    Filed: October 11, 2016
    Publication date: April 12, 2018
    Inventors: Hyukseong Kwon, Kyungnam Kim
  • Patent number: 9727785
    Abstract: A method and apparatus for processing images. A set of candidate targets is identified in a first image and in a second image that corresponds with the first image. A set of first scores is generated for the set of candidate targets using the first image. A set of second scores is generated for the set of candidate targets using the second image. A set of final scores is computed for the set of candidate targets using the set of first scores and the set of second scores. A determination is made as to which of the set of candidate targets is a target of interest based on the set of final scores.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: August 8, 2017
    Assignee: The Boeing Company
    Inventors: Hyukseong Kwon, Kyungnam Kim, Yuri Owechko
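The score-fusion flow in the abstract above can be sketched in a few lines (illustrative only; the equal weighting of the two views is an assumption, not taken from the patent): per-candidate scores from each image are combined into final scores, and the highest final score selects the target of interest.

```python
# Minimal sketch of two-view score fusion for target selection.

def final_scores(first, second, w=0.5):
    """Combine per-candidate scores from two corresponding images."""
    return [w * a + (1 - w) * b for a, b in zip(first, second)]

def target_of_interest(candidates, first, second):
    """Pick the candidate with the highest fused score."""
    scores = final_scores(first, second)
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best]

candidates = ["A", "B", "C"]
first  = [0.9, 0.4, 0.6]   # scores from the first image
second = [0.5, 0.8, 0.7]   # scores from the corresponding second image
print(target_of_interest(candidates, first, second))  # -> A
```

Fusing before selection lets a candidate that scores moderately in both views beat one that scores high in only one, which is the point of using corresponding images.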
  • Patent number: 9715639
    Abstract: A method and apparatus for performing target detection. A potential object is detected within a candidate chip in an image. The potential object is verified as a candidate object. The candidate object is classified as one of a candidate target or a background in response to the potential object being verified as the candidate object. The candidate target is verified as a target of interest in response to the candidate object being classified as the candidate target.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: July 25, 2017
    Assignee: The Boeing Company
    Inventors: Hyukseong Kwon, Kyungnam Kim, Yuri Owechko
  • Publication number: 20160371530
    Abstract: A method and apparatus for processing images. A set of candidate targets is identified in a first image and in a second image that corresponds with the first image. A set of first scores is generated for the set of candidate targets using the first image. A set of second scores is generated for the set of candidate targets using the second image. A set of final scores is computed for the set of candidate targets using the set of first scores and the set of second scores. A determination is made as to which of the set of candidate targets is a target of interest based on the set of final scores.
    Type: Application
    Filed: June 18, 2015
    Publication date: December 22, 2016
    Inventors: Hyukseong Kwon, Kyungnam Kim, Yuri Owechko
  • Publication number: 20160371850
    Abstract: A method and apparatus for performing target detection. A potential object is detected within a candidate chip in an image. The potential object is verified as a candidate object. The candidate object is classified as one of a candidate target or a background in response to the potential object being verified as the candidate object. The candidate target is verified as a target of interest in response to the candidate object being classified as the candidate target.
    Type: Application
    Filed: June 18, 2015
    Publication date: December 22, 2016
    Inventors: Hyukseong Kwon, Kyungnam Kim, Yuri Owechko
  • Publication number: 20160350936
    Abstract: Systems and methods of detecting dead pixels of image frames are described including receiving a sequence of image frames, aligning, from the sequence of image frames, pairs of image frames, and for a given pair of image frames, determining differences in intensity of corresponding pixels between the aligned pair of image frames. The method also includes, based on the differences in intensity of corresponding pixels between the aligned pair of image frames, generating mask images indicative of areas in the pairs of image frames having moving objects. The method further includes determining, within the mask images, common pixel locations indicative of areas in the pairs of image frames having moving objects over a portion of the sequence of image frames, and based on a number of the common pixel locations for a given pixel location being above a threshold, identifying the given pixel location as a dead pixel.
    Type: Application
    Filed: May 27, 2015
    Publication date: December 1, 2016
    Inventors: Dmitriy Korchev, Yuri Owechko, Hyukseong Kwon
  • Patent number: 9501839
    Abstract: Systems and methods of detecting dead pixels of image frames are described including receiving a sequence of image frames, aligning, from the sequence of image frames, pairs of image frames, and for a given pair of image frames, determining differences in intensity of corresponding pixels between the aligned pair of image frames. The method also includes, based on the differences in intensity of corresponding pixels between the aligned pair of image frames, generating mask images indicative of areas in the pairs of image frames having moving objects. The method further includes determining, within the mask images, common pixel locations indicative of areas in the pairs of image frames having moving objects over a portion of the sequence of image frames, and based on a number of the common pixel locations for a given pixel location being above a threshold, identifying the given pixel location as a dead pixel.
    Type: Grant
    Filed: May 27, 2015
    Date of Patent: November 22, 2016
    Assignee: The Boeing Company
    Inventors: Dmitriy Korchev, Yuri Owechko, Hyukseong Kwon
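The dead-pixel logic in the two entries above can be sketched as follows (illustrative only; thresholds and frame representation are assumptions): in aligned frame pairs the scene content matches, so a sensor location that keeps showing large intensity differences across many pairs behaves like a stuck pixel.

```python
# Illustrative sketch of dead-pixel detection from aligned frame pairs:
# tally locations that exceed the intensity-difference threshold (the
# mask-image step), then flag locations common to many pairs.

from collections import Counter

def dead_pixels(aligned_pairs, diff_threshold, count_threshold):
    counts = Counter()
    for frame_a, frame_b in aligned_pairs:
        for r, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
            for c, (pa, pb) in enumerate(zip(row_a, row_b)):
                if abs(pa - pb) > diff_threshold:   # mask-image entry
                    counts[(r, c)] += 1             # common-location tally
    return sorted(loc for loc, n in counts.items() if n > count_threshold)

# Location (0, 1) disagrees in every aligned pair -> flagged as dead.
pairs = [
    ([[5, 200], [5, 5]], [[5, 0], [5, 5]]),
    ([[7, 200], [7, 7]], [[7, 0], [7, 7]]),
    ([[6, 200], [6, 6]], [[6, 0], [6, 6]]),
]
print(dead_pixels(pairs, diff_threshold=50, count_threshold=2))  # -> [(0, 1)]
```

Genuinely moving objects flag different locations in different pairs, so only a defect that persists at one sensor location accumulates a count above the threshold.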
  • Publication number: 20160328860
    Abstract: Within examples, methods and systems for occlusion-robust object fingerprinting using fusion of multiple sub-region signatures are described. An example method includes receiving an indication of an object within a sequence of video frames, selecting from the sequence of video frames a reference image frame indicative of the object and candidate image frames representative of possible portions of the object, dividing the reference image frame and the candidate image frames into multiple cells, defining for the reference image frame and the candidate image frames sub-regions of the multiple cells such that the sub-regions include the same cells for overlapping representations and the sub-regions include multiple sizes, comparing characteristics of sub-regions of the reference image frame to characteristics of sub-regions of the candidate image frames and determining similarity measurements, and based on the similarity measurements, tracking the object within the sequence of video frames.
    Type: Application
    Filed: May 6, 2015
    Publication date: November 10, 2016
    Inventors: Hyukseong Kwon, Yuri Owechko, Kyungnam Kim
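The sub-region fusion idea in the abstract above can be illustrated with a toy sketch (the cell layout and histogram-intersection similarity are assumptions, not from the patent): each sub-region contributes its own similarity score, and averaging them keeps the match robust when some sub-regions are occluded.

```python
# Toy sketch of occlusion-robust matching by fusing sub-region signatures.

def similarity(sig_a, sig_b):
    """Histogram intersection between two sub-region signatures,
    normalized by the reference signature's mass."""
    return sum(min(x, y) for x, y in zip(sig_a, sig_b)) / max(sum(sig_a), 1)

def best_match(reference, candidates):
    """Average per-sub-region similarity; return the best candidate index."""
    scores = []
    for cand in candidates:
        sims = [similarity(r, c) for r, c in zip(reference, cand)]
        scores.append(sum(sims) / len(sims))
    return scores.index(max(scores))

reference = [[4, 4, 2], [1, 5, 4]]   # signatures for two sub-regions
candidates = [
    [[4, 4, 2], [0, 0, 0]],          # perfect match with one region occluded
    [[1, 1, 1], [1, 1, 1]],          # uniformly poor match everywhere
]
print(best_match(reference, candidates))  # -> 0
```

A single whole-object signature would let the occluded region drag the first candidate down; scoring sub-regions separately lets the intact region still carry the match.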