Patents by Inventor Minwoo Park

Minwoo Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11770531
    Abstract: Provided are a video decoding method and apparatus including: in a video encoding and decoding process, determining parity information of a current block based on a width and a height of the current block; determining a lookup table of the current block from among a plurality of predefined lookup tables based on the parity information; determining a dequantization scale value of the current block based on the lookup table of the current block; and performing dequantization on the current block by using the dequantization scale value.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: September 26, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Narae Choi, Minwoo Park
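The parity-driven table selection described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the claimed method: the parity rule, table contents, and scale values are all assumptions made for the example.

```python
# Two illustrative lookup tables of dequantization scale values,
# indexed by (quantization parameter mod table length).
LOOKUP_TABLES = {
    0: [40, 45, 51, 57, 64, 72],  # used when (width + height) is even
    1: [41, 46, 52, 58, 65, 73],  # used when (width + height) is odd
}

def parity_info(width: int, height: int) -> int:
    """Derive parity information from the block's width and height."""
    return (width + height) & 1

def dequantize_block(levels, width, height, qp):
    """Scale quantized levels using the table selected by block parity."""
    table = LOOKUP_TABLES[parity_info(width, height)]
    scale = table[qp % len(table)]
    return [lvl * scale for lvl in levels]
```

The point of the structure is that the decoder never signals which table to use; it is derived deterministically from block dimensions already known to both encoder and decoder.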
  • Patent number: 11765354
    Abstract: Provided are a video decoding method and apparatus including: in a video encoding and decoding process, determining parity information of a current block based on a width and a height of the current block; determining a lookup table of the current block from among a plurality of predefined lookup tables based on the parity information; determining a dequantization scale value of the current block based on the lookup table of the current block; and performing dequantization on the current block by using the dequantization scale value.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: September 19, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Narae Choi, Minwoo Park
  • Publication number: 20230282005
    Abstract: In various examples, a multi-sensor fusion machine learning model – such as a deep neural network (DNN) – may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Application
    Filed: May 1, 2023
    Publication date: September 7, 2023
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
  • Publication number: 20230281458
    Abstract: A method and an electronic device for low-complexity in-loop filter inference using feature-augmented training are provided. The method includes combining spatial and spectral domain features, using spectral domain features for global feature extraction and signalling to the spatial stream during training, using a detachable spectral domain stream for differential complexity during training versus inference, and combining a unique set of losses resulting from multi-stream and multi-feature approaches to obtain an optimal output.
    Type: Application
    Filed: February 27, 2023
    Publication date: September 7, 2023
    Inventors: Aviral AGRAWAL, Raj Narayana GADDE, Anubhav SINGH, Yinji PIAO, Minwoo PARK, Kwangpyo CHOI
  • Patent number: 11743467
    Abstract: Provided is a method of decoding coefficients included in image data, the method including determining a Rice parameter for a current coefficient, based on a base level of the current coefficient; parsing coefficient level information indicating a size of the current coefficient from a bitstream by using the determined Rice parameter; and identifying the size of the current coefficient by de-binarizing the coefficient level information by using the determined Rice parameter.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: August 29, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yinji Piao, Gahyun Ryu, Minsoo Park, Minwoo Park, Seungsoo Jeong, Kiho Choi, Narae Choi, Woongil Choi, Anish Tamse
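The coefficient de-binarization step in the abstract above follows the general shape of Golomb-Rice decoding. The sketch below is illustrative only: the Rice-parameter derivation from the base level is a simplified assumption (real codecs typically derive it from sums of neighbouring coefficients), and the bit-string input stands in for a real bitstream reader.

```python
def rice_parameter(base_level: int) -> int:
    """Toy derivation: larger base levels get larger Rice parameters."""
    if base_level < 2:
        return 0
    if base_level < 4:
        return 1
    return 2

def decode_golomb_rice(bits: str, k: int) -> int:
    """De-binarize one value: unary quotient prefix, then k fixed bits."""
    prefix = 0
    i = 0
    while bits[i] == '1':    # count leading ones (unary quotient)
        prefix += 1
        i += 1
    i += 1                   # skip the terminating '0'
    remainder = int(bits[i:i + k], 2) if k else 0
    return (prefix << k) + remainder
```

A larger Rice parameter shortens the unary prefix for large coefficient magnitudes at the cost of more fixed remainder bits, which is why the parameter is adapted per coefficient.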
  • Publication number: 20230267701
    Abstract: In various examples, sensor data representative of an image of a field of view of a vehicle sensor may be received and the sensor data may be applied to a machine learning model. The machine learning model may compute a segmentation mask representative of portions of the image corresponding to lane markings of the driving surface of the vehicle. Analysis of the segmentation mask may be performed to determine lane marking types, and lane boundaries may be generated by performing curve fitting on the lane markings corresponding to each of the lane marking types. The data representative of the lane boundaries may then be sent to a component of the vehicle for use in navigating the vehicle through the driving surface.
    Type: Application
    Filed: May 1, 2023
    Publication date: August 24, 2023
    Inventors: Yifang Xu, Xin Liu, Chia-Chin Chen, Carolina Parada, Davide Onofrio, Minwoo Park, Mehdi Sajjadi Mohammadabadi, Vijay Chintalapudi, Ozan Tonkal, John Zedlewski, Pekka Janis, Jan Nikolaus Fritsch, Gordon Grigor, Zuoguan Wang, I-Kuei Chen, Miguel Sainz
  • Patent number: 11736682
    Abstract: Provided are a video decoding method and apparatus which, during video encoding and decoding processes, determine whether a current block is in contact with an upper boundary of a largest coding unit including the current block, when it is determined that the current block is in contact with the upper boundary of the largest coding unit, determine an upper reference line of the current block as one reference line, when it is determined that the current block is not in contact with the upper boundary of the largest coding unit, determine the upper reference line of the current block based on N reference lines, and use the determined upper reference line.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: August 22, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Narae Choi, Minsoo Park, Minwoo Park, Seungsoo Jeong, Kiho Choi, Woongil Choi, Anish Tamse, Yinji Piao
  • Patent number: 11736692
    Abstract: An image decoding method including: obtaining, from a bitstream, information related to a triangle prediction mode for a current block; splitting the current block into two triangular partitions, according to the information related to a triangle prediction mode; generating a merge list for a triangle prediction mode, according to a merge list generation method in a regular merge mode; selecting a motion vector for the two triangular partitions according to information indicating the motion vector from among motion vectors included in the merge list; obtaining, from a reference image, prediction blocks corresponding to the two triangular partitions, based on the motion vector; and reconstructing the current block, based on a final prediction block.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 22, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seungsoo Jeong, Anish Tamse, Minsoo Park, Minwoo Park
  • Patent number: 11722673
    Abstract: Provided is a video decoding method including: obtaining, from a bitstream, intra prediction mode information indicating an intra prediction mode of a current block; determining an interpolation filter set to be used in prediction of the current block, based on at least one of a size of the current block and the intra prediction mode indicated by the intra prediction mode information; determining a reference location to which a current sample of the current block refers according to the intra prediction mode; determining, in the interpolation filter set, an interpolation filter that corresponds to the reference location; determining a prediction value of the current sample, according to reference samples of the current block and the interpolation filter; and reconstructing the current block, based on the prediction value of the current sample.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 8, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Narae Choi, Minwoo Park, Minsoo Park, Seungsoo Jeong, Kiho Choi, Woongil Choi, Anish Tamse, Yinji Piao
  • Patent number: 11716460
    Abstract: Disclosed is an image decoding method according to an embodiment, the image decoding method including: obtaining a first reference block and a second reference block, for bi-directional prediction of a current block; obtaining, from a bitstream, weight information for combining the first reference block with the second reference block; performing entropy decoding on the weight information to obtain a weight index; combining the first reference block with the second reference block according to a candidate value indicated by the weight index among candidate values included in a weight candidate group; and reconstructing the current block based on a result of the combining, wherein a first binary value corresponding to the weight index is entropy-decoded based on a context model, and the remaining binary value corresponding to the weight index is entropy-decoded by a bypass method.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: August 1, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seungsoo Jeong, Gahyun Ryu, Minsoo Park, Minwoo Park, Kiho Choi, Narae Choi, Woongil Choi, Anish Tamse, Yinji Piao
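The combining step in the abstract above, where a signalled weight index selects a candidate from a weight candidate group, can be sketched as below. The candidate values (in eighths, BCW-style) and the rounding offset are assumptions for illustration, not the values claimed in the patent.

```python
WEIGHT_CANDIDATES = [-2, 3, 4, 5, 10]  # weight for ref block 0, out of 8

def combine_bi_prediction(ref0, ref1, weight_index):
    """Blend two reference blocks using the indexed candidate weight."""
    w0 = WEIGHT_CANDIDATES[weight_index]
    w1 = 8 - w0                          # complementary weight for ref 1
    # Integer weighted average with rounding offset, normalized by >> 3.
    return [(w0 * a + w1 * b + 4) >> 3 for a, b in zip(ref0, ref1)]
```

Index 2 selects the equal-weight (4/8, 4/8) case, which reduces to a plain average of the two reference blocks.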
  • Publication number: 20230232038
    Abstract: Provided is a video decoding method including: obtaining, from a sequence parameter set, sequence merge mode with motion vector difference (sequence MMVD) information indicating whether an MMVD mode is applicable in a current sequence; when the MMVD mode is applicable according to the sequence MMVD information, obtaining, from a bitstream, first MMVD information indicating whether the MMVD mode is applied in a first inter prediction mode for a current block included in the current sequence; when the MMVD mode is applicable in the first inter prediction mode according to the first MMVD information, reconstructing a motion vector of the current block which is to be used in the first inter prediction mode, by using a distance of a motion vector difference and a direction of a motion vector difference obtained from the bitstream; and reconstructing the current block by using the motion vector of the current block.
    Type: Application
    Filed: March 17, 2023
    Publication date: July 20, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seungsoo JEONG, Minwoo PARK
  • Publication number: 20230230273
    Abstract: In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 20, 2023
    Inventors: Minwoo Park, Yue Wu, Michael Grabner, Cheng-Chieh Yang
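The filtering and sliding-window reduction described in the abstract above can be sketched as a simple 1D height profile over a point cloud. The window size, height band, and max-height aggregation rule are illustrative assumptions; the patented approach uses bounding shapes along the heading direction.

```python
def height_profile(points, window=1.0, max_height=0.5):
    """Reduce a 3D cloud to a 1D profile of heights along the x axis.

    points: iterable of (x, y, z) tuples in metres.
    Returns a list of (bin_start_x, max_z) pairs, sorted by x.
    """
    bins = {}
    for x, y, z in points:
        if 0.0 <= z <= max_height:       # keep near-road points only
            b = int(x // window)         # bucket along heading direction
            bins[b] = max(bins.get(b, z), z)
    return [(b * window, bins[b]) for b in sorted(bins)]
```

Sudden jumps between adjacent bins in the resulting 1D signal are what a downstream detector would flag as bumps or other protuberances.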
  • Publication number: 20230232023
    Abstract: Provided is an image decoding method including determining a plurality of coding units in a chroma image by hierarchically splitting the chroma image, based on a split shape mode of blocks in the chroma image of a current image, and decoding the current image, based on the plurality of coding units in the chroma image. In this regard, the determining of the plurality of coding units in the chroma image may include, when a size or an area of a chroma block from among a plurality of chroma blocks to be generated by splitting a current chroma block in the chroma image is equal to or smaller than a preset size or a preset area, not allowing splitting of the current chroma block based on a split shape mode of the current chroma block, and determining at least one coding unit included in the current chroma block.
    Type: Application
    Filed: January 27, 2023
    Publication date: July 20, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minwoo PARK, Minsoo PARK, Kiho CHOI, Narae CHOI, Woongil CHOI, Chanyul KIM, Seungsoo JEONG, Anish TAMSE, Yinji PIAO
  • Patent number: 11704890
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: July 18, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11700370
    Abstract: Provided is a video decoding method including: generating coding units by splitting at least one of a height and a width of a largest coding unit having a first size; based on whether a height or a width of a non-square first coding unit including an outer boundary of an image, among the coding units, is greater than a maximum transform size, determining whether it is allowed to generate two second coding units by splitting at least one of the height and the width of the first coding unit; and decoding the second coding units generated from the first coding unit.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: July 11, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minwoo Park, Minsoo Park, Anish Tamse
  • Publication number: 20230213945
    Abstract: In various examples, one or more output channels of a deep neural network (DNN) may be used to determine assignments of obstacles to paths. To increase the accuracy of the DNN, the input to the DNN may include an input image, one or more representations of path locations, and/or one or more representations of obstacle locations. The system may thus repurpose previously computed information—e.g., obstacle locations, path locations, etc.—from other operations of the system, and use them to generate more detailed inputs for the DNN to increase accuracy of the obstacle to path assignments. Once the output channels are computed using the DNN, computed bounding shapes for the objects may be compared to the outputs to determine the path assignments for each object.
    Type: Application
    Filed: December 30, 2021
    Publication date: July 6, 2023
    Inventors: Neeraj Sajjan, Mehmet K. Kocamaz, Junghyun Kwon, Sangmin Oh, Minwoo Park, David Nister
  • Publication number: 20230214654
    Abstract: In various examples, one or more deep neural networks (DNNs) are executed to regress on control points of a curve, and the control points may be used to perform a curve fitting operation—e.g., Bezier curve fitting—to identify landmark locations and geometries in an environment. The outputs of the DNN(s) may thus indicate the two-dimensional (2D) image-space and/or three-dimensional (3D) world-space control point locations, and post-processing techniques—such as clustering and temporal smoothing—may be executed to determine landmark locations and poses with precision and in real-time. As a result, reconstructed curves corresponding to the landmarks—e.g., lane line, road boundary line, crosswalk, pole, text, etc.—may be used by a vehicle to perform one or more operations for navigating an environment.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 6, 2023
    Inventors: Minwoo Park, Yilin Yang, Xiaolin Lin, Abhishek Bajpayee, Hae-Jong Seo, Eric Jonathan Yuan, Xudong Chen
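The curve-fitting step in the abstract above mentions Bezier fitting over regressed control points. As a reference for what reconstruction from control points looks like, here is the standard closed-form evaluation of a cubic Bezier curve; the cubic degree is an assumption for the example.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier at parameter t in [0, 1].

    Each p_i is a tuple of coordinates (2D image-space or 3D world-space).
    """
    s = 1.0 - t
    return tuple(
        s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Because four control points fully determine the cubic, a network only has to regress a small fixed-size set of points per landmark rather than a dense polyline.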
  • Publication number: 20230211722
    Abstract: In various examples, high beam control for vehicles may be automated using a deep neural network (DNN) that processes sensor data received from vehicle sensors. The DNN may process the sensor data to output pixel-level semantic segmentation masks in order to differentiate actionable objects (e.g., vehicles with front or back lights lit, bicyclists, or pedestrians) from other objects (e.g., parked vehicles). Resulting segmentation masks output by the DNN(s), when combined with one or more post processing steps, may be used to generate masks for automated high beam on/off activation and/or dimming or shading—thereby providing additional illumination of an environment for the driver while controlling downstream effects of high beam glare for active vehicles.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 6, 2023
    Inventors: Jincheng Li, Minwoo Park
  • Patent number: 11688181
    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: June 27, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
  • Publication number: 20230188730
    Abstract: A video decoding method includes determining whether an ultimate motion vector expression (UMVE) mode is allowed for an upper data unit including a current block, when the UMVE mode is allowed for the upper data unit, determining whether the UMVE mode is applied to the current block, when the UMVE mode is applied to the current block, determining a base motion vector of the current block, determining a correction distance and a correction direction for correction of the base motion vector, determining a motion vector of the current block by correcting the base motion vector according to the correction distance and the correction direction, and reconstructing the current block based on the motion vector of the current block.
    Type: Application
    Filed: February 6, 2023
    Publication date: June 15, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seung-soo Jeong, Minwoo Park
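The motion-vector correction step described in the abstract above can be sketched as applying a signalled distance along a signalled direction to a base motion vector. The distance and direction tables below are assumptions modeled on MMVD-style signalling, not the values from the patent.

```python
DISTANCES = [1, 2, 4, 8, 16, 32]                 # in quarter-pel units
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # +x, -x, +y, -y

def correct_motion_vector(base_mv, distance_idx, direction_idx):
    """Offset the base motion vector by the signalled correction."""
    d = DISTANCES[distance_idx]
    dx, dy = DIRECTIONS[direction_idx]
    return (base_mv[0] + dx * d, base_mv[1] + dy * d)
```

Signalling only two small indices instead of a full motion vector difference is what makes this correction scheme cheap in bits.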