Patents by Inventor Tae Eun Choe

Tae Eun Choe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961243
    Abstract: A geometric approach may be used to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
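As a rough illustration of the flow this abstract describes (align two frames, compare pixel values, threshold the disparity, and produce a bounding shape), here is a minimal sketch. The function name, the list-of-lists image format, and the single-box output are illustrative assumptions, not the patented implementation.

```python
def detect_obstacle(frame_a, frame_b, threshold):
    """Sketch of the geometric approach: compare pixel values of two
    aligned frames, keep pixels whose absolute difference (a crude
    disparity) exceeds a threshold, and fit one bounding shape."""
    hits = [(r, c)
            for r, row in enumerate(frame_a)
            for c, a in enumerate(row)
            if abs(a - frame_b[r][c]) > threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    # Bounding shape: (top, left, bottom, right) over the kept pixels.
    return (min(rows), min(cols), max(rows), max(cols))
```

A real system would also warp the first frame to the second using the tracked points before differencing; that alignment step is omitted here.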
  • Publication number: 20240001957
    Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
    Type: Application
    Filed: September 14, 2023
    Publication date: January 4, 2024
    Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
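The augmentation step in this abstract (pasting a simulated object into a real-world image and deriving a training label from it) could look roughly like the following. The function name, patch format, and label layout are hypothetical.

```python
def augment_with_object(image, obj_patch, top, left):
    """Sketch: paste a simulated object patch into a real-world image
    and return the augmented image plus the bounding box that would
    serve as the training label for the pasted object."""
    out = [row[:] for row in image]  # leave the original image intact
    for r, patch_row in enumerate(obj_patch):
        for c, v in enumerate(patch_row):
            out[top + r][left + c] = v
    bbox = (top, left,
            top + len(obj_patch) - 1, left + len(obj_patch[0]) - 1)
    return out, bbox
```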
  • Patent number: 11801861
    Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: October 31, 2023
    Assignee: NVIDIA Corporation
    Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
  • Publication number: 20230294727
    Abstract: In various examples, a hazard detection system plots hazard indicators from multiple detection sensors to grid cells of an occupancy grid corresponding to a driving environment. For example, as the ego-machine travels along a roadway, one or more sensors of the ego-machine may capture sensor data representing the driving environment. A system of the ego-machine may then analyze the sensor data to determine the existence and/or location of the one or more hazards within an occupancy grid—and thus within the environment. When a hazard is detected using a respective sensor, the system may plot an indicator of the hazard to one or more grid cells that correspond to the detected location of the hazard. Based, at least in part, on a fused or combined confidence of the hazard indicators for each grid cell, the system may predict whether the corresponding grid cell is occupied by a hazard.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 21, 2023
    Inventors: Sangmin Oh, Baris Evrim Demiroz, Gang Pan, Dong Zhang, Joachim Pehserl, Samuel Rupp Ogden, Tae Eun Choe
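The fused-confidence idea in this abstract (multiple sensors plotting hazard indicators into grid cells, with occupancy predicted from the combined confidence) can be sketched as below. The noisy-OR combination rule and the detection tuple format are assumptions for illustration; the patent does not specify this exact fusion.

```python
def fuse_hazard_grid(detections, occupied_threshold):
    """Sketch: each detection is (row, col, confidence) from some
    sensor. Confidences landing in the same grid cell are fused with a
    noisy-OR rule; a cell is predicted occupied when the fused
    confidence reaches the threshold."""
    fused = {}
    for r, c, conf in detections:
        prior = fused.get((r, c), 0.0)
        # Noisy-OR fusion: P = 1 - (1 - p1)(1 - p2)...
        fused[(r, c)] = 1.0 - (1.0 - prior) * (1.0 - conf)
    return {cell: p for cell, p in fused.items()
            if p >= occupied_threshold}
```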
  • Publication number: 20230282005
    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Application
    Filed: May 1, 2023
    Publication date: September 7, 2023
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
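The de-duplication effect attributed to the fusion network (a single object seen in overlapping fields of view yielding one fused detection) can be imitated with a simple geometric merge. This greedy IoU filter is a hand-written stand-in for the learned network, not the patented model.

```python
def iou(a, b):
    """Intersection-over-union of two (top, left, bottom, right) boxes."""
    top, left = max(a[0], b[0]), max(a[1], b[1])
    bottom, right = min(a[2], b[2]), min(a[3], b[3])
    if bottom < top or right < left:
        return 0.0
    inter = (bottom - top + 1) * (right - left + 1)
    area = lambda box: (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
    return inter / (area(a) + area(b) - inter)

def fuse_detections(per_sensor_boxes, iou_threshold=0.5):
    """Greedily keep boxes from each sensor that do not overlap heavily
    with an already-kept box, so one object seen by two sensors in an
    overlap region produces a single fused detection."""
    fused = []
    for boxes in per_sensor_boxes:
        for box in boxes:
            if all(iou(box, kept) < iou_threshold for kept in fused):
                fused.append(box)
    return fused
```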
  • Patent number: 11704913
    Abstract: A list of images is received. The images were captured by a sensor of an ADV chronologically while driving through a driving environment. A first image of the images is identified that includes a first object in a first dimension (e.g., larger size) detected by an object detector using an object detection algorithm. In response to the detection of the first object, the images in the list are traversed backwardly in time from the first image to identify a second image that includes a second object in a second dimension (e.g., smaller size) based on a moving trail of the ADV represented by the list of images. The second object is then labeled or annotated in the second image equivalent to the first object in the first image. The list of images having the labeled second image can be utilized for subsequent object detection during autonomous driving.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: July 18, 2023
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
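The backward-traversal labeling in this abstract can be sketched as follows: once the detector fires on the (now larger, closer) object, the confirmed label is propagated back through the earlier frames where the object was too small to detect. The detector callback and label propagation policy are illustrative assumptions.

```python
def back_label(frames, detector):
    """Sketch: scan the chronological frame list; at the first frame
    where the detector fires, propagate the confirmed label backward
    in time over the earlier, unlabeled frames of the moving trail."""
    labels = [None] * len(frames)
    for i, frame in enumerate(frames):
        label = detector(frame)
        if label is not None:
            labels[i] = label
            # Annotate earlier frames with the equivalent label.
            for j in range(i - 1, -1, -1):
                if labels[j] is None:
                    labels[j] = label
            break
    return labels
```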
  • Patent number: 11688181
    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: June 27, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
  • Patent number: 11679764
    Abstract: During autonomous driving, the movement trails or moving history of obstacles, as well as of the autonomous driving vehicle (ADV) itself, may be maintained in corresponding buffers. For the obstacles and the ADV, the vehicle states at different points in time are maintained and stored in one or more buffers. The vehicle states representing the moving trails or moving history of the obstacles and the ADV may be utilized to reconstruct a history trajectory of the obstacles and the ADV, which may be used for a variety of purposes. For example, the moving trails or history of obstacles may be utilized to determine the lane configuration of one or more lanes of a road, particularly in a rural area where the lane markings are unclear. The moving history of the obstacles may also be utilized to predict the future movement of the obstacles, tailgate an obstacle, and infer a lane line.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: June 20, 2023
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
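The buffered moving-trail idea above, and its use for inferring a lane line where markings are unclear, can be sketched with a bounded buffer plus a least-squares fit. The class name, the 2D state format, and the straight-line lane model are simplifying assumptions.

```python
from collections import deque

class MovingTrail:
    """Sketch: a bounded buffer of past (x, y) states for one obstacle
    or the ADV, plus a least-squares line fit used to infer an unmarked
    lane line from where traffic has actually driven."""

    def __init__(self, maxlen=100):
        self.states = deque(maxlen=maxlen)  # oldest states fall off

    def record(self, x, y):
        self.states.append((x, y))

    def infer_lane_line(self):
        # Fit y = slope * x + intercept over the stored trail.
        # (Assumes the trail is not purely lateral, i.e. x varies.)
        n = len(self.states)
        sx = sum(x for x, _ in self.states)
        sy = sum(y for _, y in self.states)
        sxx = sum(x * x for x, _ in self.states)
        sxy = sum(x * y for x, y in self.states)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return slope, (sy - slope * sx) / n
```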
  • Publication number: 20230142299
    Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
    Type: Application
    Filed: November 10, 2021
    Publication date: May 11, 2023
    Inventors: Gang Pan, Joachim Pehserl, Dong Zhang, Baris Evrim Demiroz, Samuel Rupp Ogden, Tae Eun Choe, Sangmin Oh
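The particle mechanism in this abstract (scatter confidence-weighted particles inside a detection's bounding shape, then use a second sensor to filter them) can be sketched as below. The uniform scattering, the tuple layout, and the renormalization step are illustrative choices, not the patented method.

```python
import random

def generate_particles(bbox, n, seed=0):
    """Sketch: scatter n particles uniformly inside a detection
    bounding shape (top, left, bottom, right), each with an equal
    initial confidence that an object occupies its location."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    top, left, bottom, right = bbox
    return [(rng.uniform(top, bottom), rng.uniform(left, right), 1.0 / n)
            for _ in range(n)]

def filter_particles(particles, depth_ok):
    """Keep only particles consistent with the second (depth) sensor,
    then renormalize the surviving confidences."""
    kept = [p for p in particles if depth_ok(p[0], p[1])]
    total = sum(w for _, _, w in kept)
    return [(r, c, w / total) for r, c, w in kept] if kept else []
```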
  • Publication number: 20220122001
    Abstract: Approaches presented herein provide for the generation of synthetic data to fortify a dataset for use in training a network via imitation learning. In at least one embodiment, a system is evaluated to identify failure cases, such as may correspond to false positives and false negative detections. Additional synthetic data imitating these failure cases can then be generated and utilized to provide a more abundant dataset. A network or model can then be trained, or retrained, with the original training data and the additional synthetic data. In one or more embodiments, these steps may be repeated until the evaluation metric converges, with additional synthetic training data being generated corresponding to the failure cases at each training pass.
    Type: Application
    Filed: March 31, 2021
    Publication date: April 21, 2022
    Inventors: Tae Eun Choe, Aman Kishore, Junghyun Kwon, Minwoo Park, Pengfei Hao, Akshita Mittel
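The iterative loop in this abstract (evaluate, synthesize imitations of the failure cases, retrain, repeat until the metric converges) can be expressed as a small driver over pluggable callbacks. All callback names and the convergence test are hypothetical scaffolding.

```python
def harden_with_synthetic_data(train, evaluate, synthesize, data,
                               tolerance=1e-3, max_rounds=10):
    """Sketch of the loop: train, evaluate to find failure cases
    (false positives/negatives), generate synthetic data imitating
    those failures, fold it into the dataset, and repeat until the
    evaluation metric converges or no failures remain."""
    model = train(data)
    metric, failures = evaluate(model)
    for _ in range(max_rounds):
        if not failures:
            break
        data = data + synthesize(failures)  # fortify the dataset
        model = train(data)
        new_metric, failures = evaluate(model)
        if abs(new_metric - metric) < tolerance:  # converged
            metric = new_metric
            break
        metric = new_metric
    return model, metric
```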
  • Patent number: 11227167
    Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) may be located. The image comprises a plurality of line indicators. The plurality of line indicators represent one or more lanes in the environment. The image is part of training data for a neural network. The method also includes determining a plurality of line segments based on the plurality of line indicators. The method further includes determining a vanishing point within the image based on the plurality of line segments. The method further includes updating one or more of the image or metadata associated with the image to indicate a location of the vanishing point within the image.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 18, 2022
    Assignee: BAIDU USA LLC
    Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
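The vanishing-point computation in this abstract (determine line segments from the lane indicators, then a vanishing point from the segments) can be sketched by intersecting the extended segments pairwise and averaging. The segment format and the simple averaging are assumptions; the patent does not prescribe this estimator.

```python
def vanishing_point(segments):
    """Sketch: extend each lane segment ((x1, y1), (x2, y2)) to an
    infinite line, intersect every pair, and average the intersection
    points to estimate the vanishing-point annotation."""
    def line(seg):
        (x1, y1), (x2, y2) = seg
        # Represent the line as ax + by = c.
        return y2 - y1, x1 - x2, (y2 - y1) * x1 + (x1 - x2) * y1

    pts = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            a1, b1, c1 = line(segments[i])
            a2, b2, c2 = line(segments[j])
            det = a1 * b2 - a2 * b1
            if det:  # skip parallel pairs
                pts.append(((c1 * b2 - c2 * b1) / det,
                            (a1 * c2 - a2 * c1) / det))
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```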
  • Publication number: 20210406560
    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 30, 2021
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
  • Publication number: 20210309248
    Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
    Type: Application
    Filed: January 15, 2021
    Publication date: October 7, 2021
    Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
  • Patent number: 11120566
    Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) is located. The method also includes determining, using a first neural network, a plurality of line indicators based on the image. The plurality of line indicators represent one or more lanes in the environment. The method further includes determining, using a second neural network, a vanishing point within the image based on the plurality of line indicators. The second neural network is communicatively coupled to the first neural network. The plurality of line indicators is determined simultaneously with the vanishing point. The method further includes calibrating one or more sensors of the autonomous driving vehicle based on the vanishing point.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: September 14, 2021
    Assignee: BAIDU USA LLC
    Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
  • Publication number: 20210264175
    Abstract: Systems and methods are disclosed that use a geometric approach to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a disparity value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape that corresponds to the object may be generated based on the scoring.
    Type: Application
    Filed: February 26, 2021
    Publication date: August 26, 2021
    Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
  • Patent number: 11055540
    Abstract: In one embodiment, a set of bounding box candidates are plotted onto a 2D space based on their respective dimension (e.g., widths and heights). The bounding box candidates are clustered on the 2D space based on the distribution density of the bounding box candidates. For each of the clusters of the bounding box candidates, an anchor box is determined to represent the corresponding cluster. A neural network model is trained based on the anchor boxes representing the clusters. The neural network model is utilized to detect or recognize objects based on images and/or point clouds captured by a sensor (e.g., camera, LIDAR, and/or RADAR) of an autonomous driving vehicle.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: July 6, 2021
    Assignee: BAIDU USA LLC
    Inventors: Ka Wai Tsoi, Tae Eun Choe, Yuliang Guo, Guang Chen, Weide Zhang
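The clustering step in this abstract (plot box dimensions in 2D, cluster them, and take one anchor box per cluster) can be sketched with plain k-means; the patent clusters by distribution density, so k-means here is a simplified stand-in, and the deterministic seeding is an assumption.

```python
def anchor_boxes(dims, k, iters=20):
    """Sketch: cluster ground-truth box dimensions (w, h) in 2D with
    k-means; each cluster centroid becomes one anchor box for the
    detection network."""
    centroids = list(dims[:k])  # deterministic seed: first k boxes
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in dims:
            i = min(range(k),
                    key=lambda i: (w - centroids[i][0]) ** 2
                                  + (h - centroids[i][1]) ** 2)
            clusters[i].append((w, h))
        # Recompute each centroid; keep the old one if its cluster emptied.
        centroids = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)]
    return centroids
```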
  • Patent number: 11042157
    Abstract: According to some embodiments, a system pre-processes, via a first thread, an image of the environment surrounding the ADV captured by an image capturing device of the ADV. The system processes, via a second thread, the pre-processed image with a corresponding depth image captured by a ranging device of the ADV, using a machine learning model to detect vehicle lanes. The system post-processes, via a third thread, the detected vehicle lanes to track the vehicle lanes relative to the ADV. The system generates a trajectory based on a lane line of the tracked vehicle lanes to control the ADV autonomously according to the trajectory.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: June 22, 2021
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Jun Zhu, I-Kuei Chen, Guang Chen, Weide Zhang
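The three-thread arrangement in this abstract (pre-process, detect, post-process, each on its own thread) can be sketched as a queue-linked pipeline. The stage callbacks, queue hand-off, and None-sentinel shutdown are generic producer-consumer conventions, not details from the patent.

```python
import queue
import threading

def run_pipeline(images, preprocess, detect, postprocess):
    """Sketch: thread 1 pre-processes images, thread 2 runs lane
    detection, thread 3 post-processes/tracks; queues pass frames
    between the stages in order."""
    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    results = []

    def stage(src, fn, put):
        # Pull items until the None sentinel arrives, then forward it.
        while (item := src.get()) is not None:
            put(fn(item))
        put(None)

    sink = lambda x: results.append(x) if x is not None else None
    threads = [
        threading.Thread(target=stage, args=(q1, preprocess, q2.put)),
        threading.Thread(target=stage, args=(q2, detect, q3.put)),
        threading.Thread(target=stage, args=(q3, postprocess, sink)),
    ]
    for t in threads:
        t.start()
    for img in images:
        q1.put(img)
    q1.put(None)  # shut the pipeline down
    for t in threads:
        t.join()
    return results
```

Because each stage is a single thread consuming a FIFO queue, frame order is preserved end to end.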
  • Patent number: 10915766
    Abstract: In one embodiment, in addition to detecting or recognizing an actual lane, a virtual lane is determined based on the current state or motion prediction of an ADV. A virtual lane may or may not be identical or similar to the actual lane. A virtual lane may represent the likely movement of the ADV in a next time period given the current speed and heading direction of the vehicle. If an object is detected that may cross a lane line of the virtual lane and is a closest object to the ADV, the object is considered as a CIPO, and an emergency operation may be activated. That is, even though an object may not be in the path of an actual lane, if the object is in the path of a virtual lane of an ADV, the object may be considered as a CIPO and subject to a special operation.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 9, 2021
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Yuliang Guo, Guang Chen, Weide Zhang, Ka Wai Tsoi
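The virtual-lane CIPO test in this abstract (project a corridor along the ADV's current speed and heading; flag the closest object inside it) can be sketched geometrically. The corridor model, the 3-second horizon, and all parameter names are illustrative assumptions.

```python
import math

def is_cipo(ego_heading, ego_speed, lane_half_width, obj_x, obj_y,
            horizon=3.0):
    """Sketch: the virtual lane is a corridor projected along the
    ADV's heading for `horizon` seconds of travel at the current
    speed; an object inside that corridor is treated as a CIPO even
    if it lies outside the actual painted lane."""
    # Rotate the object into the ego frame: x forward, y lateral.
    fwd = obj_x * math.cos(ego_heading) + obj_y * math.sin(ego_heading)
    lat = -obj_x * math.sin(ego_heading) + obj_y * math.cos(ego_heading)
    in_corridor = 0.0 <= fwd <= ego_speed * horizon
    return in_corridor and abs(lat) <= lane_half_width
```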
  • Patent number: 10891747
    Abstract: In response to a first image captured by a camera of an ADV, a horizon line is determined based on the camera's hardware settings, representing a vanishing point based on an initial or default pitch angle of the camera. One or more lane lines are determined based on the first image via a perception process performed on the first image. In response to a first input signal received from an input device, a position of the horizon line is updated based on the first input signal and a position of at least one of the lane lines is updated based on the updated horizon line. The input signal may represent an incremental adjustment for adjusting the position of the horizon line. A first calibration factor or first correction value is determined for calibrating a pitch angle of the camera based on a difference between the initial horizon line and the updated horizon line.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 12, 2021
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Yuliang Guo, Guang Chen, Ka Wai Tsoi, Weide Zhang
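The calibration factor in this abstract (a pitch correction derived from the difference between the initial and the adjusted horizon line) can be sketched under a pinhole camera model; the focal-length parameter and the atan relation are standard pinhole assumptions rather than details from the patent.

```python
import math

def pitch_correction(initial_horizon_row, adjusted_horizon_row,
                     focal_length_px):
    """Sketch: under a pinhole model, shifting the horizon line by d
    pixels corresponds to a pitch change of atan(d / f); the result
    is the correction applied to the camera's default pitch angle."""
    delta = adjusted_horizon_row - initial_horizon_row
    return math.atan2(delta, focal_length_px)
```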
  • Publication number: 20210004643
    Abstract: A list of images is received. The images were captured by a sensor of an ADV chronologically while driving through a driving environment. A first image of the images is identified that includes a first object in a first dimension (e.g., larger size) detected by an object detector using an object detection algorithm. In response to the detection of the first object, the images in the list are traversed backwardly in time from the first image to identify a second image that includes a second object in a second dimension (e.g., smaller size) based on a moving trail of the ADV represented by the list of images. The second object is then labeled or annotated in the second image equivalent to the first object in the first image. The list of images having the labeled second image can be utilized for subsequent object detection during autonomous driving.
    Type: Application
    Filed: July 2, 2019
    Publication date: January 7, 2021
    Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi