Patents by Inventor Hae Jong Seo
Hae Jong Seo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250138530
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
Type: Application
Filed: December 30, 2024
Publication date: May 1, 2025
Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
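The abstract above describes each pixel predicting its distance (and direction) to the nearest line pixel. One way such predictions could be decoded is by projecting every pixel along its predicted direction and accumulating votes; the peaks of the resulting map are line pixels. This voting scheme is an illustrative assumption, not the patent's exact decoding procedure:

```python
import numpy as np

def decode_line_votes(distances, angles, shape):
    """Accumulate per-pixel votes for line-pixel locations.

    Each pixel casts a vote at the location it predicts its nearest line
    pixel to be: (x + d*cos(a), y + d*sin(a)). This decoding is a sketch,
    not the procedure claimed in the patent.
    """
    votes = np.zeros(shape, dtype=np.int32)
    ys, xs = np.indices(shape)
    ty = np.round(ys + distances * np.sin(angles)).astype(int)
    tx = np.round(xs + distances * np.cos(angles)).astype(int)
    valid = (ty >= 0) & (ty < shape[0]) & (tx >= 0) & (tx < shape[1])
    np.add.at(votes, (ty[valid], tx[valid]), 1)  # unbuffered accumulation
    return votes

# Every pixel points straight at column 2, so the vote map peaks along that line.
h, w = 5, 5
cols = np.tile(np.arange(w), (h, 1))
dist = np.abs(cols - 2).astype(float)   # horizontal distance to column 2
ang = np.where(cols < 2, 0.0, np.pi)    # point right if left of the line, else left
votes = decode_line_votes(dist, ang, (h, w))
```

With this toy input, all 25 votes land in column 2, illustrating how consistent per-pixel regressions concentrate into a sharp line response.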
-
Patent number: 12248319
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
Type: Grant
Filed: June 23, 2023
Date of Patent: March 11, 2025
Assignee: NVIDIA Corporation
Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
-
Publication number: 20250029357
Abstract: In various examples, contrast values corresponding to pixels of one or more images generated using one or more sensors of a vehicle may be computed to detect and identify objects that trigger glare mitigating operations. Pixel luminance values are determined and used to compute a contrast value based on comparing the pixel luminance values to a reference luminance value that is based on a set of the pixels and the corresponding luminance values. A contrast threshold may be applied to the computed contrast values to identify glare in the image data to trigger glare mitigating operations so that the vehicle may modify the configuration of one or more illumination sources so as to reduce glare experienced by occupants and/or sensors of the vehicle.
Type: Application
Filed: September 30, 2024
Publication date: January 23, 2025
Inventors: Igor Tryndin, Abhishek Bajpayee, Yu Wang, Hae-Jong Seo
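The abstract above compares per-pixel luminance to a reference luminance and thresholds the resulting contrast. A minimal sketch of that idea, using a Weber-style contrast against the image mean; both the contrast formula and the choice of mean as reference are illustrative assumptions, since the abstract only says the reference is derived from a set of pixels:

```python
import numpy as np

def glare_mask(luminance: np.ndarray, contrast_threshold: float = 4.0) -> np.ndarray:
    """Flag pixels whose contrast against a reference luminance exceeds a threshold.

    The reference here is the mean luminance of the frame (an assumption);
    Weber-style contrast measures how far each pixel sits above that reference.
    """
    reference = luminance.mean()
    contrast = (luminance - reference) / max(reference, 1e-6)
    return contrast > contrast_threshold

# Example: a single bright spot in an otherwise dim frame triggers the mask.
frame = np.full((4, 4), 10.0)
frame[1, 2] = 200.0  # simulated glare source
mask = glare_mask(frame)
```

A downstream controller could use such a mask to decide when to dim or redirect an illumination source, as the abstract describes.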
-
Publication number: 20250022217
Abstract: Systems and methods are disclosed that relate to object detection and to generating detected object representations. Sensor data corresponding to a scene may be obtained that may represent one or more objects. A tensor may be generated based at least on the sensor data, where the tensor may represent the one or more objects and may include respective predicted 3D characteristics of the one or more objects. The tensor may be represented in 2D space and may be decoded to generate 3D representations of objects using, for example, one or more curve fitting algorithms.
Type: Application
Filed: July 13, 2023
Publication date: January 16, 2025
Inventors: Abhishek Bajpayee, Sai Krishnan Chandrasekar, Xudong Chen, Hae Jong Seo, Siddharth Kothiyal
-
Publication number: 20240410705
Abstract: In various examples, path detection using machine learning models for autonomous or semi-autonomous systems and applications is described herein. Systems and methods are disclosed that use one or more machine learning models to determine a geometry associated with a path for a vehicle. To determine the geometry, the machine learning model(s) may process sensor data generated using the vehicle and, based at least on the processing, output points associated with the path. In some examples, the machine learning model(s) outputs a limited number of points, such as between five and twenty points. One or more algorithms, such as one or more Bezier algorithms, may then be used to generate the geometry based at least on the points. As such, in some examples, the geometry may correspond to a Bezier curve that represents the path.
Type: Application
Filed: June 6, 2023
Publication date: December 12, 2024
Inventors: Trung Pham, Minwoo Park, Ha Giang Truong, Atchuta Venkata Vijay Chintalapudi, Hae-Jong Seo
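The abstract above turns a small set of model-predicted points into a smooth path via a Bezier algorithm. A minimal sketch, assuming the predicted points are used directly as Bezier control points (the abstract does not specify how points map to the curve):

```python
import numpy as np
from math import comb

def bezier_curve(control_points: np.ndarray, num_samples: int = 50) -> np.ndarray:
    """Evaluate the Bezier curve defined by N control points at num_samples
    parameter values, using the Bernstein basis:
    B(t) = sum_i C(n, i) * (1 - t)^(n - i) * t^i * P_i
    """
    n = len(control_points) - 1
    t = np.linspace(0.0, 1.0, num_samples)[:, None]
    return sum(
        comb(n, i) * (1 - t) ** (n - i) * t**i * p
        for i, p in enumerate(control_points)
    )

# Four hypothetical predicted points interpreted as cubic Bezier control points.
points = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
path = bezier_curve(points)
```

The curve interpolates the first and last control points, so a handful of predictions suffices to describe a dense, smooth path geometry.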
-
Patent number: 12136249
Abstract: In various examples, contrast values corresponding to pixels of one or more images generated using one or more sensors of a vehicle may be computed to detect and identify objects that trigger glare mitigating operations. Pixel luminance values are determined and used to compute a contrast value based on comparing the pixel luminance values to a reference luminance value that is based on a set of the pixels and the corresponding luminance values. A contrast threshold may be applied to the computed contrast values to identify glare in the image data to trigger glare mitigating operations so that the vehicle may modify the configuration of one or more illumination sources so as to reduce glare experienced by occupants and/or sensors of the vehicle.
Type: Grant
Filed: December 13, 2021
Date of Patent: November 5, 2024
Assignee: NVIDIA Corporation
Inventors: Igor Tryndin, Abhishek Bajpayee, Yu Wang, Hae-Jong Seo
-
Publication number: 20240339035
Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
Type: Application
Filed: June 17, 2024
Publication date: October 10, 2024
Inventors: Davide Marco Onofrio, Hae-Jong Seo, David Nister, Minwoo Park, Neda Cvijetic
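The abstract above metricizes how much independent path perception signals agree or disagree. One simple way to express such a metric is the spread across redundant predictions of the same path; the specific statistic below (mean per-point standard deviation) is an illustrative assumption, not the patent's metric:

```python
import numpy as np

def path_agreement(paths: np.ndarray) -> float:
    """Score agreement among redundant path predictions.

    `paths` has shape (num_sources, num_points): each row holds one
    perception input's lateral offsets along the path. The mean per-point
    standard deviation across sources is returned; low values mean the
    independent signals concur.
    """
    return float(paths.std(axis=0).mean())

# Three sources that agree closely score lower than a diverging set.
agreeing = np.array([[0.0, 0.1, 0.2], [0.0, 0.12, 0.21], [0.01, 0.1, 0.19]])
diverging = np.array([[0.0, 0.1, 0.2], [0.5, 0.9, 1.4], [-0.4, -0.8, -1.2]])
```

A planner could treat a low disagreement score as evidence the ensemble output is trustworthy, and a high score as a signal to fall back to a safer behavior.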
-
Patent number: 12051332
Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
Type: Grant
Filed: September 8, 2022
Date of Patent: July 30, 2024
Assignee: NVIDIA Corporation
Inventors: Davide Marco Onofrio, Hae-Jong Seo, David Nister, Minwoo Park, Neda Cvijetic
-
Publication number: 20240169549
Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
Type: Application
Filed: January 26, 2024
Publication date: May 23, 2024
Inventors: Dongwoo Lee, Junghyun Kwon, Sangmin Oh, Wenchao Zheng, Hae-Jong Seo, David Nister, Berta Rodriguez Hervas
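The abstract above scores a predicted quadrilateral against ground truth corners using a minimum aggregate distance. A sketch of one reading of that step: since a quadrilateral's corners can be listed starting from any vertex, the match is scored against every cyclic rotation of the ground truth and the smallest summed corner-to-corner distance is kept. This rotation-based matching is an illustrative assumption:

```python
import numpy as np

def min_aggregate_corner_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Smallest summed Euclidean corner-to-corner distance over the four
    cyclic orderings of the ground-truth quadrilateral's corners."""
    best = float("inf")
    for shift in range(4):
        rolled = np.roll(gt, shift, axis=0)
        total = np.linalg.norm(pred - rolled, axis=1).sum()
        best = min(best, total)
    return best

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
# The same square with corners listed from a different starting vertex
# still matches perfectly once the orderings are aligned.
shifted = np.roll(square, 1, axis=0)
```

During training, a small aggregate distance between a prediction and a labeled parking spot could mark the corresponding anchor box as a positive sample, as the abstract describes.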
-
Patent number: 11941819
Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
Type: Grant
Filed: December 6, 2021
Date of Patent: March 26, 2024
Assignee: NVIDIA Corporation
Inventors: Dongwoo Lee, Junghyun Kwon, Sangmin Oh, Wenchao Zheng, Hae-Jong Seo, David Nister, Berta Rodriguez Hervas
-
Patent number: 11921502
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
Type: Grant
Filed: January 6, 2023
Date of Patent: March 5, 2024
Assignee: NVIDIA Corporation
Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
-
Publication number: 20230333553
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
Type: Application
Filed: June 23, 2023
Publication date: October 19, 2023
Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
-
Publication number: 20230282005
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
Type: Application
Filed: May 1, 2023
Publication date: September 7, 2023
Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
-
Publication number: 20230214654
Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real-time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
Type: Application
Filed: February 27, 2023
Publication date: July 6, 2023
Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
-
Patent number: 11688181
Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
Type: Grant
Filed: June 21, 2021
Date of Patent: June 27, 2023
Assignee: NVIDIA Corporation
Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
-
Publication number: 20230186593
Abstract: In various examples, contrast values corresponding to pixels of one or more images generated using one or more sensors of a vehicle may be computed to detect and identify objects that trigger glare mitigating operations. Pixel luminance values are determined and used to compute a contrast value based on comparing the pixel luminance values to a reference luminance value that is based on a set of the pixels and the corresponding luminance values. A contrast threshold may be applied to the computed contrast values to identify glare in the image data to trigger glare mitigating operations so that the vehicle may modify the configuration of one or more illumination sources so as to reduce glare experienced by occupants and/or sensors of the vehicle.
Type: Application
Filed: December 13, 2021
Publication date: June 15, 2023
Inventors: Igor Tryndin, Abhishek Bajpayee, Yu Wang, Hae-Jong Seo
-
Publication number: 20230177839
Abstract: In various examples, methods and systems are provided for determining, using a machine learning model, one or more of the following operational domain conditions related to an autonomous and/or semi-autonomous machine: amount of camera blindness, blindness classification, illumination level, path surface condition, visibility distance, scene type classification, and distance to a scene. Once one or more of these conditions are determined, an operational level of the machine may be determined, and the machine may be controlled according to the operational level.
Type: Application
Filed: December 2, 2021
Publication date: June 8, 2023
Inventors: Abhishek Bajpayee, Arjun Gupta, Dylan Doblar, Hae-Jong Seo, George Tang, Keerthi Raj Nagaraja
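The abstract above reduces perceived operational-domain conditions to an operational level that governs control of the machine. A minimal sketch of such a reduction; the condition names come from the abstract, but the thresholds, level names, and decision rule are hypothetical, since the abstract does not specify them:

```python
def operational_level(conditions: dict) -> str:
    """Map operational-domain conditions to a machine operational level.

    Thresholds and level names below are illustrative assumptions; the
    condition categories (camera blindness, illumination, path surface,
    visibility distance) follow the abstract.
    """
    if conditions["camera_blindness"] > 0.5 or conditions["visibility_distance_m"] < 20:
        return "minimal-risk maneuver"
    if conditions["illumination"] == "low" or conditions["path_surface"] == "wet":
        return "reduced-speed autonomous"
    return "full autonomous"

degraded = {"camera_blindness": 0.8, "visibility_distance_m": 100,
            "illumination": "high", "path_surface": "dry"}
nominal = {"camera_blindness": 0.0, "visibility_distance_m": 300,
           "illumination": "high", "path_surface": "dry"}
```

The point is the two-stage structure: conditions are estimated first, then combined into a single level that downstream control consumes.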
-
Publication number: 20230152801
Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
Type: Application
Filed: January 6, 2023
Publication date: May 18, 2023
Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
-
Patent number: 11651215
Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real-time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
Type: Grant
Filed: December 2, 2020
Date of Patent: May 16, 2023
Assignee: NVIDIA Corporation
Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
-
Publication number: 20230110027
Abstract: In various examples, systems and methods are disclosed that use one or more machine learning models (MLMs)—such as deep neural networks (DNNs)—to compute outputs indicative of an estimated visibility distance corresponding to sensor data generated using one or more sensors of an autonomous or semi-autonomous machine. Once the visibility distance is computed using the one or more MLMs, a determination of the usability of the sensor data for one or more downstream tasks of the machine may be evaluated. As such, where an estimated visibility distance is low, the corresponding sensor data may be relied upon for fewer tasks than when the visibility distance is high.
Type: Application
Filed: September 29, 2021
Publication date: April 13, 2023
Inventors: Abhishek Bajpayee, Arjun Gupta, George Tang, Hae-Jong Seo
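The abstract above gates downstream tasks on the estimated visibility distance: the lower the visibility, the fewer tasks may rely on the sensor data. A minimal sketch of that gating; the task names and distance thresholds are hypothetical, chosen only to illustrate the idea:

```python
def usable_tasks(visibility_distance_m: float) -> list:
    """Return the downstream tasks that may still rely on the sensor data
    at the given estimated visibility distance. Each task is enabled only
    when visibility meets its (illustrative) minimum distance."""
    gates = [
        (10.0, "close-range obstacle detection"),
        (50.0, "lane keeping"),
        (150.0, "highway cruise planning"),
    ]
    return [task for min_dist, task in gates if visibility_distance_m >= min_dist]

tasks_fog = usable_tasks(30.0)    # low visibility: only the short-range task
tasks_clear = usable_tasks(200.0) # high visibility: all tasks enabled
```

This mirrors the abstract's monotone relationship: raising the estimated visibility distance can only grow the set of tasks permitted to consume the data.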