Patents by Inventor Hyukseong Kwon

Hyukseong Kwon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11350039
    Abstract: Described is a system for contrast and entropy-based perception adaptation to optimize perception. The system is operable for receiving an input image of a scene with a camera system and detecting one or more objects (having perception data) in the input image. The perception data of the one or more objects is converted into probes, which are then converted into axioms using probabilistic signal temporal logic. The axioms are evaluated based on probe bounds. If the axioms are within the probe bounds, then results are provided; however, if the axioms are outside of the probe bounds, the system estimates optimal contrast bounds and entropy bounds as perception parameters. The contrast and entropy in the camera system are then adjusted based on the perception parameters.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: May 31, 2022
    Assignee: HRL Laboratories, LLC
    Inventors: Hyukseong Kwon, Amir M. Rahimi, Amit Agarwal, Rajan Bhattacharyya
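The bound-checking loop described in the abstract above can be sketched as follows. This is a minimal illustration, not the patented method: `axiom_holds` and `adapt_parameters` are hypothetical names, probe values are assumed to be scalars, and the re-centering rule stands in for the patent's actual optimization of contrast and entropy bounds.

```python
from statistics import mean, pstdev

def axiom_holds(probe_values, lower, upper, confidence=0.95):
    """An axiom is satisfied when the fraction of probe samples that
    fall inside its bounds meets the required confidence level."""
    inside = sum(lower <= v <= upper for v in probe_values)
    return inside / len(probe_values) >= confidence

def adapt_parameters(probe_values, lower, upper):
    """If the axiom fails, propose new perception-parameter bounds
    centered on the observed probe distribution (illustrative only)."""
    if axiom_holds(probe_values, lower, upper):
        return None  # probes are within bounds; keep current parameters
    m, s = mean(probe_values), pstdev(probe_values)
    return (m - 2 * s, m + 2 * s)
```

When the probes fit the bounds, no adjustment is returned; otherwise new bounds bracketing the observed distribution are proposed.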
  • Publication number: 20220161825
    Abstract: A system for interactive hypothesis estimation of multi-vehicle traffic for autonomous driving is provided. The system includes a sensor upon a host vehicle providing data regarding an operating environment of the host vehicle and a computerized device. The computerized device is operable to monitor the data from the sensor, identify a road surface based upon the data, and identify a neighborhood object based upon the data. The computerized device is further operable to determine a pressure score for the neighborhood object based upon a likelihood that the neighborhood object will conflict with the host vehicle based upon the road surface and the neighborhood object, selectively track the neighborhood object based upon the pressure score, and navigate the host vehicle based upon the tracking of the neighborhood object.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Scott Rad, Hyukseong Kwon, Rajan Bhattacharyya
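A toy version of the pressure-score idea from the abstract above: objects that are close, closing in fast, and on a conflicting path score higher, and only high-scoring objects are tracked. The scoring formula, threshold, and field names are illustrative assumptions, not taken from the patent.

```python
import math

def pressure_score(rel_position, rel_velocity, lane_conflict):
    """Hypothetical pressure score for a neighborhood object, given its
    position and velocity relative to the host vehicle."""
    distance = math.hypot(*rel_position)
    # Positive when the object is closing on the host vehicle.
    closing_speed = max(0.0, -(rel_position[0] * rel_velocity[0] +
                               rel_position[1] * rel_velocity[1]) / max(distance, 1e-6))
    score = closing_speed / max(distance, 1e-6)
    return score * (2.0 if lane_conflict else 1.0)

def select_tracked(objects, threshold=0.05):
    """Selectively track only objects whose pressure score exceeds a threshold."""
    return [o for o in objects if o["score"] >= threshold]
```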
  • Publication number: 20220155455
    Abstract: A system for ground surface projection for autonomous driving of a host vehicle is provided. The system includes a LIDAR device of the host vehicle and a computerized device. The computerized device is operable to monitor data from the LIDAR device including a total point cloud. The total point cloud describes an actual ground surface in the operating environment of the host vehicle. The device is further operable to segment the total point cloud into a plurality of local point clouds and, for each of the local point clouds, determine a local polygon estimating a portion of the actual ground surface. The device is further operable to assemble the local polygons into a total estimated ground surface and navigate the host vehicle based upon the total estimated ground surface.
    Type: Application
    Filed: November 16, 2020
    Publication date: May 19, 2022
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Jacqueline Staiger, Hyukseong Kwon, Amit Agarwal, Rajan Bhattacharyya
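The segment-then-fit pipeline above can be sketched as follows. This is a simplified stand-in under loud assumptions: points are bucketed into square x-y tiles, and each tile's "polygon" is just its bounding quad at the median height rather than the patent's actual per-tile polygon estimation.

```python
def segment_into_tiles(points, tile_size):
    """Bucket 3D points into square x-y tiles (the local point clouds)."""
    tiles = {}
    for x, y, z in points:
        key = (int(x // tile_size), int(y // tile_size))
        tiles.setdefault(key, []).append((x, y, z))
    return tiles

def local_ground_polygon(tile_points):
    """Estimate a tile's ground patch as its x-y bounding quad placed
    at the median point height (illustrative only)."""
    xs = [p[0] for p in tile_points]
    ys = [p[1] for p in tile_points]
    zs = sorted(p[2] for p in tile_points)
    z = zs[len(zs) // 2]
    return [(min(xs), min(ys), z), (max(xs), min(ys), z),
            (max(xs), max(ys), z), (min(xs), max(ys), z)]
```

Assembling the per-tile quads over all tiles yields the total estimated ground surface.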
  • Patent number: 11334767
    Abstract: Described is a system to evaluate and reduce perception error in object detection and recognition. The system includes a perception module that receives perception data (of an object(s)) from an environment proximate a mobile platform. Perception probes are generated that describe one or more characteristics of the objects. The perception probes are converted into probabilistic signal temporal logic (PSTL)-based constraints that provide axioms having statistical analysis of the perception probes. The axioms are evaluated to classify the perception probes as valid or erroneous. Optimal perception parameters are generated by solving an optimization problem based on the axioms, which allows the system to adjust the perception module based on the optimal perception parameters.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: May 17, 2022
    Assignee: HRL Laboratories, LLC
    Inventors: Hyukseong Kwon, Amir M. Rahimi, Amit Agarwal, Rajan Bhattacharyya
  • Patent number: 11288498
    Abstract: Described is a system for learning actions for image-based action recognition in an autonomous vehicle. The system separates a set of labeled action image data from a source domain into components. The components are mapped onto a set of action patterns, thereby creating a dictionary of action patterns. For each action in the set of labeled action data, a mapping is learned from the action pattern representing the action onto a class label for the action. The system then maps a set of new unlabeled target action image data onto a shared embedding feature space in which action patterns can be discriminated. For each target action in the set of new unlabeled target action image data, a class label for the target action is identified. Based on the identified class label, the autonomous vehicle is caused to perform a vehicle maneuver corresponding to the identified class label.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: March 29, 2022
    Assignee: HRL Laboratories, LLC
    Inventors: Amir M. Rahimi, Hyukseong Kwon, Heiko Hoffmann, Soheil Kolouri
  • Patent number: 11232296
    Abstract: Described is a system for action recognition through application of deep embedded clustering. For each image frame of an input video, the system computes skeletal joint-based pose features representing an action of a human in the image frame. Non-linear mapping of the pose features into an embedded action space is performed. Temporal classification of the action is performed and a set of categorical gesture-based labels is obtained. The set of categorical gesture-based labels is used to control movement of a machine.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: January 25, 2022
    Assignee: HRL Laboratories, LLC
    Inventors: Amir M. Rahimi, Heiko Hoffmann, Hyukseong Kwon
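A minimal sketch of the pipeline above, under stated assumptions: `pose_features` computes scale-normalized joint offsets from a root joint as a simplified stand-in for the patent's skeletal joint-based features, and `nearest_label` replaces the learned non-linear embedding and temporal classifier with a nearest-centroid lookup in a flat feature space.

```python
import math

def pose_features(joints, root=0):
    """Skeletal joint-based pose features: joint offsets relative to a
    root joint, normalized by the largest offset (illustrative only)."""
    rx, ry = joints[root]
    offsets = [(x - rx, y - ry) for x, y in joints]
    scale = max(math.hypot(dx, dy) for dx, dy in offsets) or 1.0
    return [c / scale for dx, dy in offsets for c in (dx, dy)]

def nearest_label(feature, centroids):
    """Assign the gesture label of the closest cluster centroid in the
    embedded action space."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lbl: dist(feature, centroids[lbl]))
```

The resulting label stream is what would then drive the machine's movement commands.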
  • Publication number: 20210227117
    Abstract: Described is a system for contrast and entropy-based perception adaptation to optimize perception. The system is operable for receiving an input image of a scene with a camera system and detecting one or more objects (having perception data) in the input image. The perception data of the one or more objects is converted into probes, which are then converted into axioms using probabilistic signal temporal logic. The axioms are evaluated based on probe bounds. If the axioms are within the probe bounds, then results are provided; however, if the axioms are outside of the probe bounds, the system estimates optimal contrast bounds and entropy bounds as perception parameters. The contrast and entropy in the camera system are then adjusted based on the perception parameters.
    Type: Application
    Filed: December 23, 2020
    Publication date: July 22, 2021
    Inventors: Hyukseong Kwon, Amir M. Rahimi, Amit Agarwal, Rajan Bhattacharyya
  • Publication number: 20210192219
    Abstract: Described is a system for detecting and correcting perception errors in a perception system. In operation, the system generates a list of detected objects from perception data of a scene, which allows for the generation of a list of background classes from backgrounds in the perception data associated with the list of detected objects. For each detected object in the list of detected objects, a closest background class is identified from the list of background classes. Vectors can then be used to determine a semantic feature, which is used to identify axioms. An optimal perception parameter is then generated, which is used to adjust perception parameters in the perception system to minimize perception errors.
    Type: Application
    Filed: March 2, 2021
    Publication date: June 24, 2021
    Inventors: Amit Agarwal, Amir M. Rahimi, Hyukseong Kwon, Rajan Bhattacharyya
  • Publication number: 20210089762
    Abstract: Described is a system for learning actions for image-based action recognition in an autonomous vehicle. The system separates a set of labeled action image data from a source domain into components. The components are mapped onto a set of action patterns, thereby creating a dictionary of action patterns. For each action in the set of labeled action data, a mapping is learned from the action pattern representing the action onto a class label for the action. The system then maps a set of new unlabeled target action image data onto a shared embedding feature space in which action patterns can be discriminated. For each target action in the set of new unlabeled target action image data, a class label for the target action is identified. Based on the identified class label, the autonomous vehicle is caused to perform a vehicle maneuver corresponding to the identified class label.
    Type: Application
    Filed: July 16, 2020
    Publication date: March 25, 2021
    Inventors: Amir M. Rahimi, Hyukseong Kwon, Heiko Hoffmann, Soheil Kolouri
  • Publication number: 20210089837
    Abstract: Described is a system to evaluate and reduce perception error in object detection and recognition. The system includes a perception module that receives perception data (of an object(s)) from an environment proximate a mobile platform. Perception probes are generated that describe one or more characteristics of the objects. The perception probes are converted into probabilistic signal temporal logic (PSTL)-based constraints that provide axioms having statistical analysis of the perception probes. The axioms are evaluated to classify the perception probes as valid or erroneous. Optimal perception parameters are generated by solving an optimization problem based on the axioms, which allows the system to adjust the perception module based on the optimal perception parameters.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Inventors: Hyukseong Kwon, Amir M. Rahimi, Amit Agarwal, Rajan Bhattacharyya
  • Patent number: 10942029
    Abstract: A tracking system for tracking a moving target includes a processor and a tracking module that implements an iterative process for tracking the moving target. The iterative process includes receiving sensor data for a current state of the moving target. The process also includes applying a filter engine to the sensor data and a measure of error of a previous prediction of the current state of the target to produce a first estimate for an upcoming state and a first measure of error thereof. The process also includes receiving, from at least one second tracking system, at least one second estimate for the upcoming state and a second measure of error thereof. The process further includes defining a consensus estimate for the upcoming state and a consensus measure of error thereof using the first and second estimates and their respective measures of error.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: March 9, 2021
    Assignee: The Boeing Company
    Inventors: Hyukseong Kwon, David W. Payton, Chong Ding
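The consensus step described above can be illustrated with standard inverse-variance fusion, a common way to combine estimates weighted by their measures of error. This is a textbook sketch for scalar states, not the patent's filter engine.

```python
def consensus(estimates, variances):
    """Fuse local state estimates by inverse-variance weighting: each
    tracker's estimate contributes in proportion to its confidence,
    and the fused error is smaller than any single tracker's error."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var
```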
  • Patent number: 10896202
    Abstract: Described is a system for an episodic memory used by an automated platform. The system acquires data from an episodic memory that comprises an event database, an event-sequence graph, and an episode list. Using the event-sequence graph, the system identifies a closest node to a current environment for the automated platform. Based on the closest node and using a hash function or key based on the hash function, the system retrieves from the event database an episode that corresponds to the closest node, the episode including a sequence of events. Behavior of the automated platform in the current environment is guided based on the data from the episodic memory.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: January 19, 2021
    Assignee: HRL Laboratories, LLC
    Inventors: Youngkwan Cho, Hyukseong Kwon, Rajan Bhattacharyya
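The event-database lookup described above can be sketched as a dictionary keyed by a quantizing hash of the state, with retrieval via the closest stored node. The class name, quantization scheme, and distance metric are all illustrative assumptions.

```python
class EpisodicMemory:
    """Minimal episodic memory: an event database keyed by a hash of a
    quantized state, each entry holding an episode (sequence of events)."""
    def __init__(self, resolution=1.0):
        self.resolution = resolution
        self.event_db = {}

    def _key(self, state):
        # Hash function: quantize the state so nearby states share a key.
        return tuple(round(s / self.resolution) for s in state)

    def store(self, state, episode):
        self.event_db[self._key(state)] = episode

    def retrieve(self, current_state):
        """Return the episode stored at the node closest to the current
        environment, or None if memory is empty."""
        if not self.event_db:
            return None
        query = self._key(current_state)
        key = min(self.event_db,
                  key=lambda k: sum((a - b) ** 2 for a, b in zip(k, query)))
        return self.event_db[key]
```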
  • Publication number: 20210012100
    Abstract: Described is a system for action recognition through application of deep embedded clustering. For each image frame of an input video, the system computes skeletal joint-based pose features representing an action of a human in the image frame. Non-linear mapping of the pose features into an embedded action space is performed. Temporal classification of the action is performed and a set of categorical gesture-based labels is obtained. The set of categorical gesture-based labels is used to control movement of a machine.
    Type: Application
    Filed: May 13, 2020
    Publication date: January 14, 2021
    Inventors: Amir M. Rahimi, Heiko Hoffmann, Hyukseong Kwon
  • Publication number: 20200307586
    Abstract: A method, autonomous vehicle and system for operating an autonomous vehicle. A sensor obtains data of an agent. A processor determines a measure of complexity of the environment in which the autonomous vehicle is operating from the sensor data, selects a control scheme for operating the autonomous vehicle based on the determined complexity, and operates the autonomous vehicle using the selected control scheme.
    Type: Application
    Filed: March 26, 2019
    Publication date: October 1, 2020
    Inventors: Aashish N. Patel, Hyukseong Kwon, Amir M. Rahimi
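The complexity-to-control-scheme mapping above can be illustrated as follows; the complexity formula, thresholds, and scheme names are all hypothetical stand-ins for whatever measure and schemes the patent actually uses.

```python
def select_control_scheme(num_agents, occlusion_ratio):
    """Map a scalar environment-complexity measure to a control scheme
    (illustrative thresholds and names)."""
    complexity = num_agents + 10.0 * occlusion_ratio
    if complexity < 5.0:
        return "nominal"
    elif complexity < 15.0:
        return "cautious"
    return "conservative"
```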
  • Publication number: 20200310421
    Abstract: An autonomous vehicle, system and method of operating the autonomous vehicle. The system includes a performance evaluator, a decision module and a navigation system. The performance evaluator determines a performance grade for each of a plurality of decisions for operating the autonomous vehicle. The decision module selects the decision having the greatest performance grade. The navigation system operates the autonomous vehicle using the selected decision.
    Type: Application
    Filed: March 26, 2019
    Publication date: October 1, 2020
    Inventors: Hyukseong Kwon, Aashish N. Patel, Michael J. Daily
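The grade-and-select step above reduces to an argmax over candidate decisions. A minimal sketch, assuming the evaluator is a callable that returns a scalar grade:

```python
def select_decision(decisions, evaluate):
    """Grade each candidate decision with the performance evaluator and
    pick the one with the greatest performance grade."""
    graded = [(evaluate(d), d) for d in decisions]
    best_grade, best = max(graded, key=lambda g: g[0])
    return best, best_grade
```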
  • Patent number: 10733338
    Abstract: Methods, apparatus, and articles of manufacture are disclosed to generate a synthetic point cloud of a spacecraft. An example apparatus includes a point cloud generator to generate a first synthetic point cloud of a first simulated space vehicle based on a simulated illumination source and a simulated image sensor, where the simulated illumination source and the simulated image sensor is operatively coupled to a second simulated space vehicle at a first position, where the simulated image sensor measures a parameter of the first simulated space vehicle, where the simulated illumination source uses a first configuration.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: August 4, 2020
    Assignee: THE BOEING COMPANY
    Inventors: Hyukseong Kwon, Kyungnam Kim
  • Patent number: 10679355
    Abstract: Described is a system for detecting moving objects. During operation, the system obtains ego-motion velocity data of a moving platform and generates a predicted image of a scene proximate the moving platform by projecting three-dimensional (3D) data into an image plane based on pixel values of the scene. A contrast image is generated based on a difference between the predicted image and an actual image taken at a next step in time. An actionable prediction map is then generated based on the contrast image. Finally, one or more moving objects may be detected based on the actionable prediction map.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: June 9, 2020
    Assignee: HRL Laboratories, LLC
    Inventors: Kyungnam Kim, Hyukseong Kwon, Heiko Hoffmann
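The predict-compare-threshold chain above can be sketched on toy 2D intensity grids. The absolute-difference contrast and the fixed threshold are illustrative simplifications of the patent's contrast image and actionable prediction map.

```python
def contrast_image(predicted, actual):
    """Per-pixel absolute difference between the ego-motion-predicted
    image and the actual next frame."""
    return [[abs(p - a) for p, a in zip(prow, arow)]
            for prow, arow in zip(predicted, actual)]

def detect_moving(predicted, actual, threshold=0.5):
    """Threshold the contrast image into a prediction map: pixels the
    ego-motion model failed to predict are candidate moving objects."""
    diff = contrast_image(predicted, actual)
    return [(r, c) for r, row in enumerate(diff)
            for c, v in enumerate(row) if v > threshold]
```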
  • Publication number: 20200118281
    Abstract: A method for generating a 3D model point cloud of an object includes capturing a 2D image of the object and a 3D image of the object. The 3D image of the object includes a point cloud. The point cloud includes a multiplicity of points and includes a plurality of missing points or holes in the point cloud. The method additionally includes generating an upsampled 3D point cloud from the 3D image using local entropy data of the 2D image to fill at least some missing points or holes in the point cloud and merging a model point cloud from a previous viewpoint or location of a sensor platform and the upsampled 3D point cloud to create a new 3D model point cloud. The method further includes quantizing the new 3D point cloud to generate an updated 3D model point cloud.
    Type: Application
    Filed: October 10, 2018
    Publication date: April 16, 2020
    Inventors: Hyukseong Kwon, Kyungnam Kim
  • Patent number: 10614579
    Abstract: A method for generating a 3D model point cloud of an object includes capturing a 2D image of the object and a 3D image of the object. The 3D image of the object includes a point cloud. The point cloud includes a multiplicity of points and includes a plurality of missing points or holes in the point cloud. The method additionally includes generating an upsampled 3D point cloud from the 3D image using local entropy data of the 2D image to fill at least some missing points or holes in the point cloud and merging a model point cloud from a previous viewpoint or location of a sensor platform and the upsampled 3D point cloud to create a new 3D model point cloud. The method further includes quantizing the new 3D point cloud to generate an updated 3D model point cloud.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: April 7, 2020
    Assignee: The Boeing Company
    Inventors: Hyukseong Kwon, Kyungnam Kim
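The entropy-guided hole filling shared by this grant and the earlier publication 20200118281 can be sketched as follows. This is a toy version under explicit assumptions: `local_entropy` is plain Shannon entropy over a 2D intensity patch, and `fill_hole` averages neighboring depths only where the co-registered image patch is homogeneous, since high entropy suggests the hole straddles an object boundary.

```python
import math
from collections import Counter

def local_entropy(patch):
    """Shannon entropy of the intensity values in a 2D image patch."""
    flat = [v for row in patch for v in row]
    counts = Counter(flat)
    n = len(flat)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def fill_hole(neighbor_depths, patch, entropy_threshold=1.0):
    """Fill a missing depth point by averaging neighbor depths, but only
    where the 2D patch has low local entropy (illustrative rule)."""
    if local_entropy(patch) > entropy_threshold or not neighbor_depths:
        return None  # leave the hole; the image suggests a depth edge
    return sum(neighbor_depths) / len(neighbor_depths)
```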
  • Patent number: 10606266
    Abstract: An apparatus is provided for tracking a target moving between states using an iterative process. The apparatus receives sensor data for a current state i, and applies a cubature information filter and an H-infinity filter thereto to respectively produce an estimate for the upcoming state i+1 and a measure of error thereof, and adjust the measure of error. The apparatus then defines a consensus estimate of the upcoming state i+1 and a consensus adjusted measure of error thereof from the estimate and adjusted measure of error, and a second estimate and second adjusted measure of error that is received from at least one second apparatus tracking the target. The apparatus then applies a cubature information filter to the consensus estimate of the upcoming state i+1 and the consensus adjusted measure of error to predict the upcoming state i+1.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: March 31, 2020
    Assignee: The Boeing Company
    Inventors: Hyukseong Kwon, David W. Payton, Chong Ding