Patents by Inventor Minwoo Park

Minwoo Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230211722
    Abstract: In various examples, high beam control for vehicles may be automated using a deep neural network (DNN) that processes sensor data received from vehicle sensors. The DNN may process the sensor data to output pixel-level semantic segmentation masks in order to differentiate actionable objects (e.g., vehicles with front or back lights lit, bicyclists, or pedestrians) from other objects (e.g., parked vehicles). Resulting segmentation masks output by the DNN(s), when combined with one or more post-processing steps, may be used to generate masks for automated high beam on/off activation and/or dimming or shading—thereby providing additional illumination of an environment for the driver while controlling downstream effects of high beam glare for active vehicles.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 6, 2023
    Inventors: Jincheng Li, Minwoo Park
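    As an illustrative sketch of the post-processing this abstract describes: given a per-pixel segmentation mask, actionable-object regions can be dilated into a "dim here" region, leaving high beams on elsewhere. The class IDs and margin below are hypothetical, and the patent's actual post-processing steps are not specified in the abstract.

    ```python
    import numpy as np

    # Hypothetical class IDs for the segmentation mask; the real DNN's
    # output classes are not given in the abstract.
    ACTIONABLE = {1, 2, 3}  # e.g., lit vehicles, bicyclists, pedestrians

    def high_beam_mask(seg_mask: np.ndarray, margin: int = 8) -> np.ndarray:
        """Return a boolean mask that is True where high beams may stay on."""
        dim = np.isin(seg_mask, list(ACTIONABLE))
        # Grow each actionable region by `margin` pixels (a simple box
        # dilation) to add a safety buffer before dimming or shading.
        h, w = seg_mask.shape
        padded = np.pad(dim, margin)
        grown = np.zeros_like(dim)
        for dy in range(-margin, margin + 1):
            for dx in range(-margin, margin + 1):
                grown |= padded[margin + dy:margin + dy + h,
                                margin + dx:margin + dx + w]
        return ~grown  # False (dim) wherever an actionable object is nearby
    ```

    The returned mask could then gate per-region high-beam activation or shading in the headlight controller.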
  • Publication number: 20230213945
    Abstract: In various examples, one or more output channels of a deep neural network (DNN) may be used to determine assignments of obstacles to paths. To increase the accuracy of the DNN, the input to the DNN may include an input image, one or more representations of path locations, and/or one or more representations of obstacle locations. The system may thus repurpose previously computed information—e.g., obstacle locations, path locations, etc.—from other operations of the system, and use them to generate more detailed inputs for the DNN to increase accuracy of the obstacle to path assignments. Once the output channels are computed using the DNN, computed bounding shapes for the objects may be compared to the outputs to determine the path assignments for each object.
    Type: Application
    Filed: December 30, 2021
    Publication date: July 6, 2023
    Inventors: Neeraj Sajjan, Mehmet K. Kocamaz, Junghyun Kwon, Sangmin Oh, Minwoo Park, David Nister
  • Publication number: 20230214654
    Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real-time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 6, 2023
    Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
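    The curve reconstruction step above can be sketched by evaluating a Bezier curve from regressed control points in its Bernstein form; the cubic 2D example below is illustrative only, as the abstract does not fix the curve degree or the clustering/smoothing details.

    ```python
    import numpy as np
    from math import comb

    def bezier_points(control_points: np.ndarray, num_samples: int = 50) -> np.ndarray:
        """Evaluate a Bezier curve of arbitrary degree (Bernstein form)
        at `num_samples` parameter values in [0, 1]."""
        n = len(control_points) - 1
        t = np.linspace(0.0, 1.0, num_samples)[:, None]
        curve = np.zeros((num_samples, control_points.shape[1]))
        for i, p in enumerate(control_points):
            curve += comb(n, i) * (1 - t) ** (n - i) * t ** i * p
        return curve

    # A cubic example in 2D image space: four regressed control points
    # reconstruct a smooth lane-line-like curve.
    ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
    lane = bezier_points(ctrl)
    ```

    The same evaluation applies unchanged to 3D world-space control points, since the Bernstein weights are independent of the point dimension.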
  • Patent number: 11688181
    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: June 27, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
  • Publication number: 20230188730
    Abstract: A video decoding method includes determining whether an ultimate motion vector expression (UMVE) mode is allowed for an upper data unit including a current block, when the UMVE mode is allowed for the upper data unit, determining whether the UMVE mode is applied to the current block, when the UMVE mode is applied to the current block, determining a base motion vector of the current block, determining a correction distance and a correction direction for correction of the base motion vector, determining a motion vector of the current block by correcting the base motion vector according to the correction distance and the correction direction, and reconstructing the current block based on the motion vector of the current block.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seung-soo Jeong, Minwoo Park
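    The correction step in this abstract amounts to adding a signed offset, indexed by a correction distance and direction, to the base motion vector. The tables below are illustrative stand-ins; the actual UMVE distance/direction tables are fixed by the codec specification, not by this abstract.

    ```python
    # Hypothetical lookup tables (illustrative values, in quarter-pel units).
    UMVE_DISTANCES = [1, 2, 4, 8, 16, 32, 64, 128]
    UMVE_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # +x, -x, +y, -y

    def correct_motion_vector(base_mv, distance_idx, direction_idx):
        """Apply a UMVE-style correction: base MV plus a signed offset."""
        dist = UMVE_DISTANCES[distance_idx]
        dx, dy = UMVE_DIRECTIONS[direction_idx]
        return (base_mv[0] + dx * dist, base_mv[1] + dy * dist)
    ```

    The corrected vector is then used for motion compensation when reconstructing the current block.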
  • Patent number: 11676364
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: June 13, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yifang Xu, Xin Liu, Chia-Chih Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Publication number: 20230166733
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
    Type: Application
    Filed: January 31, 2023
    Publication date: June 1, 2023
    Inventors: Sayed Mehdi Sajjadi Mohammadabadi, Berta Rodriguez Hervas, Hang Dou, Igor Tryndin, David Nister, Minwoo Park, Neda Cvijetic, Junghyun Kwon, Trung Pham
  • Publication number: 20230171422
    Abstract: Provided are a video decoding method and apparatus for, in a video encoding and decoding procedure, when a merge candidate list of a current block is configured, determining whether the number of merge candidates included in the merge candidate list is greater than 1 and is smaller than a predetermined maximum merge candidate number, when the number of the merge candidates included in the merge candidate list is greater than 1 and is smaller than the predetermined maximum merge candidate number, determining an additional merge candidate by using a first merge candidate and a second merge candidate of the merge candidate list of the current block, configuring the merge candidate list by adding the determined additional merge candidate to the merge candidate list, and performing prediction on the current block, based on the merge candidate list.
    Type: Application
    Filed: November 7, 2022
    Publication date: June 1, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anish Tamse, Minwoo Park, Minsoo Park
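    The list-construction condition in this abstract can be sketched as below. Averaging the first two candidates is an illustrative combination rule only; the abstract does not specify how the additional candidate is derived from them.

    ```python
    def maybe_add_combined_candidate(merge_list, max_candidates):
        """If the merge candidate list holds more than one but fewer than
        the maximum number of candidates, append an extra candidate
        derived from the first two (here: their component-wise average)."""
        if 1 < len(merge_list) < max_candidates:
            a, b = merge_list[0], merge_list[1]
            merge_list.append(((a[0] + b[0]) // 2, (a[1] + b[1]) // 2))
        return merge_list
    ```

    Lists that are already at the maximum, or that hold a single candidate, pass through unchanged, matching the guard condition described in the abstract.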
  • Patent number: 11657532
    Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: May 23, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Yue Wu, Michael Grabner, Cheng-Chieh Yang
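    The sliding-window reduction described in this abstract can be sketched as follows: filter the point cloud by height, then bin points along the heading direction and take a height statistic per window. The window size, height threshold, and max-height statistic are illustrative choices, not the patent's parameters.

    ```python
    import numpy as np

    def surface_height_profile(points: np.ndarray,
                               window: float = 0.5,
                               max_height: float = 0.3) -> np.ndarray:
        """Reduce a 3D point cloud (x forward, y left, z up) to a 1D
        height signal along the heading direction."""
        # Keep only points within the height threshold (the drivable
        # surface plus small protuberances such as speed bumps).
        pts = points[np.abs(points[:, 2]) <= max_height]
        x_min, x_max = pts[:, 0].min(), pts[:, 0].max()
        edges = np.arange(x_min, x_max + window, window)
        profile = []
        for lo in edges[:-1]:
            in_win = pts[(pts[:, 0] >= lo) & (pts[:, 0] < lo + window)]
            profile.append(in_win[:, 2].max() if len(in_win) else 0.0)
        return np.asarray(profile)
    ```

    Peaks in the resulting 1D signal would then indicate candidate bumps or humps along the driving surface.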
  • Publication number: 20230152801
    Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment (e.g., for updating a world model) in a variety of autonomous machine applications.
    Type: Application
    Filed: January 6, 2023
    Publication date: May 18, 2023
    Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
  • Patent number: 11648945
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: May 16, 2023
    Assignee: NVIDIA Corporation
    Inventors: Sayed Mehdi Sajjadi Mohammadabadi, Berta Rodriguez Hervas, Hang Dou, Igor Tryndin, David Nister, Minwoo Park, Neda Cvijetic, Junghyun Kwon, Trung Pham
  • Patent number: 11651215
    Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real-time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: May 16, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
  • Publication number: 20230146358
    Abstract: Provided are a video decoding method and apparatus in which, during video encoding and decoding processes, a first bin of a sub-block merge index indicating a candidate motion vector of a sub-block merge mode is obtained, the first bin being arithmetic-encoded using a context model; a second bin, arithmetic-encoded in a bypass mode, is obtained based on a first value obtained by arithmetic-decoding the first bin by using the context model; a second value is obtained by arithmetic-decoding the second bin in the bypass mode; and prediction on a current block is performed in the sub-block merge mode, based on the first value and the second value.
    Type: Application
    Filed: December 30, 2022
    Publication date: May 11, 2023
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Anish Tamse, Minwoo Park
  • Publication number: 20230145364
    Abstract: Provided are a video decoding method and apparatus for obtaining, from a bitstream, information about a first motion vector of a current block, determining, based on the information about the first motion vector, the first motion vector, determining a candidate list including a plurality of candidate prediction motion vectors for determining a second motion vector of the current block, determining, based on a distance between each of the plurality of candidate prediction motion vectors and the first motion vector, one of the plurality of candidate prediction motion vectors as the second motion vector, and determining a motion vector of the current block by using the first motion vector and the second motion vector.
    Type: Application
    Filed: November 9, 2022
    Publication date: May 11, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minsoo Park, Minwoo Park, Ilkoo Kim, Kwangpyo Choi
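    The selection step in this abstract chooses one candidate based on its distance to the first motion vector. The abstract does not fix the metric or whether the nearest candidate wins; the sketch below assumes nearest-by-squared-Euclidean-distance as an illustrative choice.

    ```python
    def select_second_mv(first_mv, candidates):
        """Choose, from the candidate prediction motion vectors, the
        second motion vector based on its distance to the first MV
        (nearest candidate, squared Euclidean metric, both assumed)."""
        return min(candidates,
                   key=lambda mv: (mv[0] - first_mv[0]) ** 2 +
                                  (mv[1] - first_mv[1]) ** 2)
    ```

    The first and second motion vectors together then determine the final motion vector of the current block.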
  • Publication number: 20230136860
    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using a parametric mathematical modeling. A variety of synthetic 3D road surfaces may be generated by modeling a 3D road surface using varied parameters to simulate changes in road direction and lateral surface slope. In an example embodiment, a synthetic 3D road surface may be created by modeling a longitudinal 3D curve and expanding the longitudinal 3D curve to a 3D surface, and the resulting synthetic 3D surface may be sampled to form a synthetic ground truth projection image (e.g., a 2D height map). To generate corresponding input training data, a known pattern that represents which pixels may remain unobserved during 3D structure estimation may be generated and applied to a ground truth projection image to simulate a corresponding sparse projection image.
    Type: Application
    Filed: October 28, 2021
    Publication date: May 4, 2023
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
  • Publication number: 20230139772
    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using a simulated environment. For example, a simulation may be run to simulate a virtual world or environment, render frames of virtual sensor data (e.g., images), and generate corresponding depth maps and segmentation masks (identifying a component of the simulated environment such as a road). To generate input training data, 3D structure estimation may be performed on a rendered frame to generate a representation of a 3D surface structure of the road. To generate corresponding ground truth training data, a corresponding depth map and segmentation mask may be used to generate a dense representation of the 3D surface structure.
    Type: Application
    Filed: October 28, 2021
    Publication date: May 4, 2023
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
  • Publication number: 20230135234
    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated from real-world data. For example, one or more vehicles may collect image data and LiDAR data while navigating through a real-world environment. To generate input training data, 3D surface structure estimation may be performed on captured image data to generate a sparse representation of a 3D surface structure of interest (e.g., a 3D road surface). To generate corresponding ground truth training data, captured LiDAR data may be smoothed, subject to outlier removal, subject to triangulation to filling missing values, accumulated from multiple LiDAR sensors, aligned with corresponding frames of image data, and/or annotated to identify 3D points on the 3D surface of interest, and the identified 3D points may be projected to generate a dense representation of the 3D surface structure.
    Type: Application
    Filed: October 28, 2021
    Publication date: May 4, 2023
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
  • Publication number: 20230135088
    Abstract: In various examples, a 3D surface structure such as the 3D surface structure of a road (3D road surface) may be observed and estimated to generate a 3D point cloud or other representation of the 3D surface structure. Since the estimated representation may be sparse, a deep neural network (DNN) may be used to predict values for a dense representation of the 3D surface structure from the sparse representation. For example, a sparse 3D point cloud may be projected to form a sparse projection image (e.g., a sparse 2D height map), which may be fed into the DNN to predict a dense projection image (e.g., a dense 2D height map). The predicted dense representation of the 3D surface structure may be provided to an autonomous vehicle drive stack to enable safe and comfortable planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 28, 2021
    Publication date: May 4, 2023
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
  • Publication number: 20230136235
    Abstract: In various examples, a 3D surface structure such as the 3D surface structure of a road (3D road surface) may be observed and estimated to generate a 3D point cloud or other representation of the 3D surface structure. Since the representation may be sparse, one or more densification techniques may be applied to densify the representation of the 3D surface structure. For example, the relationship between sparse and dense projection images (e.g., 2D height maps) may be modeled with a Markov random field, and Maximum a Posterior (MAP) inference may be performed using a corresponding joint probability distribution to estimate the most likely dense values given the sparse values. The resulting dense representation of the 3D surface structure may be provided to an autonomous vehicle drive stack to enable safe and comfortable planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 28, 2021
    Publication date: May 4, 2023
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
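    The MRF/MAP densification described in this abstract can be sketched under a common simplifying assumption: with a Gaussian MRF, a pairwise smoothness prior, and hard constraints at observed cells, MAP inference reduces to solving a discrete Laplace equation, approximated below with damped Jacobi sweeps. The potential functions, solver, and toroidal boundary handling (via np.roll) are all illustrative, not the patent's.

    ```python
    import numpy as np

    def densify_height_map(sparse: np.ndarray,
                           observed: np.ndarray,
                           iters: int = 200) -> np.ndarray:
        """Fill unobserved cells of a sparse 2D height map by pulling
        each unknown cell toward the mean of its 4 neighbors while
        holding observed cells fixed."""
        dense = np.where(observed, sparse, 0.0).astype(float)
        for _ in range(iters):
            up = np.roll(dense, 1, axis=0)
            down = np.roll(dense, -1, axis=0)
            left = np.roll(dense, 1, axis=1)
            right = np.roll(dense, -1, axis=1)
            avg = (up + down + left + right) / 4.0
            # Observed cells are reset to their measured sparse values.
            dense = np.where(observed, sparse, avg)
        return dense
    ```

    On a map whose left and right columns are observed at heights 0 and 1, this converges to a linear ramp across the unobserved interior, which is the expected MAP solution under a pure smoothness prior.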
  • Publication number: 20230130814
    Abstract: In examples, autonomous vehicles are enabled to negotiate yield scenarios in a safe and predictable manner. In response to detecting a yield scenario, a wait element data structure is generated that encodes geometries of an ego path, a contender path that includes at least one contention point with the ego path, as well as a state of contention associated with the at least one contention point. Geometry of yield scenario context may also be encoded, such as inside ground of an intersection, entry or exit lines, etc. The wait element data structure is passed to a yield planner of the autonomous vehicle. The yield planner determines a yielding behavior for the autonomous vehicle based at least on the wait element data structure. A control system of the autonomous vehicle may operate the autonomous vehicle in accordance with the yielding behavior, such that the autonomous vehicle safely negotiates the yield scenario.
    Type: Application
    Filed: October 27, 2021
    Publication date: April 27, 2023
    Inventors: David Nister, Minwoo Park, Miguel Sainz Serra, Vaibhav Thukral, Berta Rodriguez Hervas