Patents by Inventor Tae Eun Choe
Tae Eun Choe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240317263
Abstract: Systems and methods are disclosed relating to viewpoint adapted perception for autonomous machines and applications. A 3D perception network may be adapted to handle unavailable target rig data by training one or more layers of the 3D perception network as part of a training network using real source rig data and simulated source and target rig data. Feature statistics extracted from the real source data may be used to transform the features extracted from the simulated data during training. The paths for real and simulated data through the resulting network may be alternately trained on real and simulated data to update shared weights for the different paths. As such, one or more of the paths through the training network(s) may be designated as the 3D perception network, and target rig data may be applied to the 3D perception network to perform one or more perception tasks.
Type: Application
Filed: May 31, 2024
Publication date: September 26, 2024
Inventors: Ahyun SEO, Tae Eun Choe, Minwoo Park, Jung Seock Joo
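The feature-statistics transform described in this abstract resembles adaptive-instance-normalization-style alignment: simulated features are normalized, then rescaled with statistics taken from real data. A minimal sketch under that assumption (the function name and the per-channel mean/std formulation are illustrative, not taken from the patent):

```python
import numpy as np

def match_feature_stats(sim_feats, real_feats, eps=1e-5):
    """Shift simulated features toward the statistics of real features.

    sim_feats, real_feats: (N, C) arrays of per-sample feature vectors.
    """
    mu_s, sd_s = sim_feats.mean(axis=0), sim_feats.std(axis=0)
    mu_r, sd_r = real_feats.mean(axis=0), real_feats.std(axis=0)
    # normalize the simulated features, then re-apply the real-data mean/scale
    return (sim_feats - mu_s) / (sd_s + eps) * sd_r + mu_r
```

After the transform, the simulated batch carries the real batch's per-channel mean and (approximately) its standard deviation.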
-
Publication number: 20240320923
Abstract: Systems and methods are disclosed relating to viewpoint adapted perception for autonomous machines and applications. A 3D perception network may be adapted to handle unavailable target rig data by training one or more layers of the 3D perception network as part of a training network using simulated source and target rig data. A consistency loss that compares (e.g., top-down) transformed feature maps extracted from simulated source and target rig data may be used to minimize differences across training channels. As such, one or more of the paths through the training network(s) may be designated as the 3D perception network, and target rig data may be applied to the 3D perception network to perform one or more perception tasks.
Type: Application
Filed: May 31, 2024
Publication date: September 26, 2024
Inventors: Ahyun SEO, Tae Eun Choe, Minwoo Park, Jung Seock Joo
-
Publication number: 20240312123
Abstract: In various examples, systems and methods are disclosed that relate to data augmentation for training/updating perception models in autonomous or semi-autonomous systems and applications. For example, a system may receive data associated with a set of frames that are captured using a plurality of cameras positioned in fixed relation relative to the machine; generate a panoramic view based at least on the set of frames; provide data associated with the panoramic view to a model to cause the model to generate a high dynamic range (HDR) panoramic view; determine lighting information associated with a light distribution map based at least on the HDR panoramic view; determine a virtual scene; and render an asset and a shadow on at least one of the frames, based at least on the virtual scene and the light distribution map, the shadow being a shadow corresponding to the asset.
Type: Application
Filed: February 29, 2024
Publication date: September 19, 2024
Applicant: NVIDIA Corporation
Inventors: Malik Aqeel Anwar, Tae Eun Choe, Zian Wang, Sanja Fidler, Minwoo Park
-
Publication number: 20240265555
Abstract: Systems and methods are disclosed that use a geometric approach to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. Subsets of the third pixels that have a disparity image value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring that corresponds to the object.
Type: Application
Filed: March 22, 2024
Publication date: August 8, 2024
Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
-
Patent number: 11961243
Abstract: A geometric approach may be used to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring.
Type: Grant
Filed: February 26, 2021
Date of Patent: April 16, 2024
Assignee: NVIDIA Corporation
Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
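The core geometric idea in this abstract (compare aligned frames, threshold the resulting disparity image, bound the surviving pixels) can be illustrated with a small NumPy sketch. This is a rough single-blob illustration only; it omits the point tracking, alignment, and per-pixel scoring steps the patent describes:

```python
import numpy as np

def detect_disparity_region(frame_a, frame_b, threshold=0.2):
    """Compare two already-aligned frames and bound above-threshold pixels.

    frame_a, frame_b: 2D arrays of pixel intensities.
    Returns (x_min, y_min, x_max, y_max), or None if nothing exceeds threshold.
    """
    disparity = np.abs(frame_a.astype(float) - frame_b.astype(float))
    ys, xs = np.nonzero(disparity > threshold)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Identical frames yield no detection; a region of changed pixels yields its bounding rectangle.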
-
Publication number: 20240001957
Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
Type: Application
Filed: September 14, 2023
Publication date: January 4, 2024
Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
-
Patent number: 11801861
Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
Type: Grant
Filed: January 15, 2021
Date of Patent: October 31, 2023
Assignee: NVIDIA Corporation
Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
-
Publication number: 20230294727
Abstract: In various examples, a hazard detection system plots hazard indicators from multiple detection sensors to grid cells of an occupancy grid corresponding to a driving environment. For example, as the ego-machine travels along a roadway, one or more sensors of the ego-machine may capture sensor data representing the driving environment. A system of the ego-machine may then analyze the sensor data to determine the existence and/or location of the one or more hazards within an occupancy grid—and thus within the environment. When a hazard is detected using a respective sensor, the system may plot an indicator of the hazard to one or more grid cells that correspond to the detected location of the hazard. Based, at least in part, on a fused or combined confidence of the hazard indicators for each grid cell, the system may predict whether the corresponding grid cell is occupied by a hazard.
Type: Application
Filed: March 15, 2022
Publication date: September 21, 2023
Inventors: Sangmin Oh, Baris Evrim Demiroz, Gang Pan, Dong Zhang, Joachim Pehserl, Samuel Rupp Ogden, Tae Eun Choe
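The per-cell "fused or combined confidence" can be illustrated with a standard independent-evidence combination. The noisy-OR rule below is an assumption made for illustration; the abstract does not specify which fusion rule the system uses:

```python
def fuse_cell_confidence(confidences):
    """Combine per-sensor hazard confidences for one grid cell (noisy-OR).

    confidences: iterable of values in [0, 1], one per sensor indicator.
    """
    p_clear = 1.0
    for c in confidences:
        p_clear *= (1.0 - c)  # probability every indicator is a miss
    return 1.0 - p_clear

def cell_occupied(confidences, threshold=0.5):
    """Predict occupancy when the fused confidence clears a threshold."""
    return fuse_cell_confidence(confidences) >= threshold
```

Two weak indicators (0.4 each) fuse to 0.64 and cross a 0.5 threshold, while a single 0.2 indicator does not; this is the usual motivation for fusing multiple sensors per cell.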
-
Publication number: 20230282005
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
Type: Application
Filed: May 1, 2023
Publication date: September 7, 2023
Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
-
Patent number: 11704913
Abstract: A list of images is received. The images were captured by a sensor of an ADV chronologically while driving through a driving environment. A first image of the images is identified that includes a first object in a first dimension (e.g., larger size) detected by an object detector using an object detection algorithm. In response to the detection of the first object, the images in the list are traversed backwardly in time from the first image to identify a second image that includes a second object in a second dimension (e.g., smaller size) based on a moving trail of the ADV represented by the list of images. The second object is then labeled or annotated in the second image equivalent to the first object in the first image. The list of images having the labeled second image can be utilized for subsequent object detection during autonomous driving.
Type: Grant
Filed: July 2, 2019
Date of Patent: July 18, 2023
Assignee: BAIDU USA LLC
Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
-
Patent number: 11688181
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
Type: Grant
Filed: June 21, 2021
Date of Patent: June 27, 2023
Assignee: NVIDIA Corporation
Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
-
Patent number: 11679764
Abstract: During autonomous driving, the movement trails or moving history of obstacles, as well as of the autonomous driving vehicle (ADV) itself, may be maintained in a corresponding buffer. For the obstacles and the ADV, the vehicle states at different points in time are maintained and stored in one or more buffers. The vehicle states representing the moving trails or moving history of the obstacles and the ADV may be utilized to reconstruct a history trajectory of the obstacles and the ADV, which may be used for a variety of purposes. For example, the moving trails or history of obstacles may be utilized to determine the lane configuration of one or more lanes of a road, particularly in a rural area where the lane markings are unclear. The moving history of the obstacles may also be utilized to predict the future movement of the obstacles, tailgate an obstacle, and infer a lane line.
Type: Grant
Filed: June 28, 2019
Date of Patent: June 20, 2023
Assignee: BAIDU USA LLC
Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
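A toy version of such a per-vehicle history buffer, with a simple lane-line inference by least-squares fitting a line through the stored trail. The class name, the (t, x, y) state layout, and the linear fit are illustrative assumptions, not details from the patent:

```python
from collections import deque

class MovingHistory:
    """Fixed-size buffer of (t, x, y) vehicle states; oldest entries drop off."""

    def __init__(self, maxlen=100):
        self._states = deque(maxlen=maxlen)

    def record(self, t, x, y):
        self._states.append((t, x, y))

    def trail(self):
        return list(self._states)

    def infer_line(self):
        """Least-squares fit y = a*x + b through the trail (needs >= 2 points)."""
        pts = self.trail()
        n = len(pts)
        if n < 2:
            return None
        sx = sum(p[1] for p in pts); sy = sum(p[2] for p in pts)
        sxx = sum(p[1] ** 2 for p in pts); sxy = sum(p[1] * p[2] for p in pts)
        denom = n * sxx - sx * sx
        if denom == 0:
            return None  # vertical trail; slope undefined in this form
        a = (n * sxy - sx * sy) / denom
        b = (sy - a * sx) / n
        return a, b
```

The `deque(maxlen=...)` gives the ring-buffer behavior the abstract implies: new states push out the oldest ones, so the trail always covers the most recent window.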
-
Publication number: 20230142299
Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
Type: Application
Filed: November 10, 2021
Publication date: May 11, 2023
Inventors: Gang Pan, Joachim Pehserl, Dong Zhang, Baris Evrim Demiroz, Samuel Rupp Ogden, Tae Eun Choe, Sangmin Oh
-
Publication number: 20220122001
Abstract: Approaches presented herein provide for the generation of synthetic data to fortify a dataset for use in training a network via imitation learning. In at least one embodiment, a system is evaluated to identify failure cases, such as may correspond to false positive and false negative detections. Additional synthetic data imitating these failure cases can then be generated and utilized to provide a more abundant dataset. A network or model can then be trained, or retrained, with the original training data and the additional synthetic data. In one or more embodiments, these steps may be repeated until the evaluation metric converges, with additional synthetic training data being generated corresponding to the failure cases at each training pass.
Type: Application
Filed: March 31, 2021
Publication date: April 21, 2022
Inventors: Tae Eun Choe, Aman Kishore, Junghyun Kwon, Minwoo Park, Pengfei Hao, Akshita Mittel
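The train / evaluate / synthesize-failures loop repeated until the evaluation metric converges can be sketched generically. Here `train`, `evaluate`, and `synthesize` are placeholder callables supplied by the caller, not APIs from the publication:

```python
def augment_until_converged(train, evaluate, synthesize, data,
                            tol=1e-3, max_rounds=10):
    """Repeat train -> evaluate -> add synthetic failure-case data until
    the evaluation metric stops changing by more than `tol`."""
    model, prev_metric = None, None
    for _ in range(max_rounds):
        model = train(data)
        metric, failures = evaluate(model)
        if prev_metric is not None and abs(metric - prev_metric) < tol:
            break  # evaluation metric has converged
        data = data + synthesize(failures)  # imitate the observed failures
        prev_metric = metric
    return model, data
```

Each pass grows the dataset only with data targeted at the failure cases found in that pass, which is the mechanism the abstract describes.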
-
Patent number: 11227167
Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) may be located. The image comprises a plurality of line indicators. The plurality of line indicators represent one or more lanes in the environment. The image is part of training data for a neural network. The method also includes determining a plurality of line segments based on the plurality of line indicators. The method further includes determining a vanishing point within the image based on the plurality of line segments. The method further includes updating one or more of the image or metadata associated with the image to indicate a location of the vanishing point within the image.
Type: Grant
Filed: June 28, 2019
Date of Patent: January 18, 2022
Assignee: BAIDU USA LLC
Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
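Determining a vanishing point from a set of line segments can be posed as a least-squares intersection of the lines through each segment. The homogeneous-coordinates formulation below is one standard way to do this, offered as an illustrative assumption rather than the patent's specific method:

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection point of lines defined by 2D segments.

    segments: iterable of ((x1, y1), (x2, y2)). Returns (vx, vy).
    """
    rows = []
    for (x1, y1), (x2, y2) in segments:
        # homogeneous line through the two endpoints: l = p x q
        rows.append(np.cross([x1, y1, 1.0], [x2, y2, 1.0]))
    A = np.asarray(rows, dtype=float)
    # each line l satisfies l . (vx, vy, 1) = 0, so solve
    # A[:, :2] @ (vx, vy) = -A[:, 2] in the least-squares sense
    v, *_ = np.linalg.lstsq(A[:, :2], -A[:, 2], rcond=None)
    return float(v[0]), float(v[1])
```

With noisy lane markings the segments will not meet exactly, which is why the least-squares form is used instead of intersecting a single pair of lines.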
-
Publication number: 20210406560
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
Type: Application
Filed: June 21, 2021
Publication date: December 30, 2021
Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
-
Publication number: 20210309248
Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
Type: Application
Filed: January 15, 2021
Publication date: October 7, 2021
Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
-
Patent number: 11120566
Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) is located. The method also includes determining, using a first neural network, a plurality of line indicators based on the image. The plurality of line indicators represent one or more lanes in the environment. The method further includes determining, using a second neural network, a vanishing point within the image based on the plurality of line indicators. The second neural network is communicatively coupled to the first neural network. The plurality of line indicators is determined simultaneously with the vanishing point. The method further includes calibrating one or more sensors of the autonomous driving vehicle based on the vanishing point.
Type: Grant
Filed: June 28, 2019
Date of Patent: September 14, 2021
Assignee: BAIDU USA LLC
Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
-
Publication number: 20210264175
Abstract: Systems and methods are disclosed that use a geometric approach to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a disparity image value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring that corresponds to the object.
Type: Application
Filed: February 26, 2021
Publication date: August 26, 2021
Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
-
Patent number: 11055540
Abstract: In one embodiment, a set of bounding box candidates are plotted onto a 2D space based on their respective dimensions (e.g., widths and heights). The bounding box candidates are clustered on the 2D space based on the distribution density of the bounding box candidates. For each of the clusters of the bounding box candidates, an anchor box is determined to represent the corresponding cluster. A neural network model is trained based on the anchor boxes representing the clusters. The neural network model is utilized to detect or recognize objects based on images and/or point clouds captured by a sensor (e.g., camera, LIDAR, and/or RADAR) of an autonomous driving vehicle.
Type: Grant
Filed: June 28, 2019
Date of Patent: July 6, 2021
Assignee: BAIDU USA LLC
Inventors: Ka Wai Tsoi, Tae Eun Choe, Yuliang Guo, Guang Chen, Weide Zhang
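The abstract describes clustering (width, height) pairs by distribution density; a common stand-in for deriving anchor boxes from box dimensions is k-means, sketched below. Note the substitution: k-means is used here purely for illustration and is not the density-based clustering the patent claims:

```python
import numpy as np

def anchor_boxes(dims, k=2, iters=20, seed=0):
    """Cluster (width, height) pairs and return k anchor-box dimensions."""
    dims = np.asarray(dims, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct candidate boxes
    centers = dims[rng.choice(len(dims), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each box to its nearest center, then recompute the centers
        dist = np.linalg.norm(dims[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = dims[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers
```

Each returned center is a representative (width, height), playing the role of the anchor box that represents its cluster.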