Patents by Inventor Minwoo Park

Minwoo Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230122119
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Application
    Filed: December 16, 2022
    Publication date: April 20, 2023
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
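
The entry above describes a sequential DNN that, after training on ground truth produced by cross-sensor fusion, predicts object velocity and time-to-collision from camera images alone. Below is a minimal, hypothetical sketch of such a sequence model (a small per-frame CNN feeding a GRU with a regression head); the architecture, layer sizes, and output layout are illustrative assumptions rather than the patented design.

```python
import torch
import torch.nn as nn

class SequentialPerceptionNet(nn.Module):
    """Toy sequence model: per-frame CNN features -> GRU -> velocity/TTC head.

    Illustrative only; the real network, losses, and outputs are those defined
    in the patent, not this sketch.
    """
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                     # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 3)                # (vx, vy) world-space velocity and TTC

    def forward(self, frames):                            # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        seq_out, _ = self.temporal(feats)
        return self.head(seq_out[:, -1])                  # predict from the last time step

# Example: predictions for a batch of two 8-frame clips.
clips = torch.randn(2, 8, 3, 64, 64)
preds = SequentialPerceptionNet()(clips)                  # shape (2, 3)
```
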
  • Publication number: 20230121088
    Abstract: Provided is a video decoding method including: generating a merge candidate list including neighboring blocks referred to predict a motion vector of a current block in a skip mode or a merge mode; when a merge motion vector difference is used according to merge difference mode information indicating whether the merge motion vector difference and a motion vector determined from the merge candidate list are used, determining a base motion vector from a candidate determined among the merge candidate list based on merge candidate information; determining the motion vector of the current block by using the base motion vector and a merge motion vector difference of the current block, the merge motion vector difference being determined by using a distance index and direction index of the merge motion vector difference of the current block; and reconstructing the current block by using the motion vector of the current block.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 20, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seungsoo JEONG, Minwoo Park, Kiho Choi, Anish Tamse
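
The decoding step in the entry above combines a base motion vector chosen from the merge candidate list with a signalled merge motion vector difference given by a distance index and a direction index. The sketch below illustrates that combination; the distance and direction tables are assumed values modeled on common merge-with-MVD designs, not the normative ones from the patent.

```python
# Illustrative reconstruction of a motion vector in a merge-with-MVD style mode.
# The tables below are assumptions, not normative values from the patent or any standard.

MMVD_DISTANCES = [1, 2, 4, 8, 16, 32, 64, 128]        # quarter-sample units (assumed)
MMVD_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # +x, -x, +y, -y (assumed)

def reconstruct_mv(base_mv, distance_idx, direction_idx):
    """Apply a signalled merge motion vector difference to a base motion vector."""
    step = MMVD_DISTANCES[distance_idx]
    dx, dy = MMVD_DIRECTIONS[direction_idx]
    return (base_mv[0] + dx * step, base_mv[1] + dy * step)

# Example: base MV (12, -3), distance index 2 (4 samples), direction +y.
print(reconstruct_mv((12, -3), 2, 2))   # -> (12, 1)
```
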
  • Patent number: 11627335
    Abstract: Provided is a video decoding method including: obtaining, from a sequence parameter set, sequence merge mode with motion vector difference (sequence MMVD) information indicating whether an MMVD mode is applicable in a current sequence; when the MMVD mode is applicable according to the sequence MMVD information, obtaining, from a bitstream, first MMVD information indicating whether the MMVD mode is applied in a first inter prediction mode for a current block included in the current sequence; when the MMVD mode is applicable in the first inter prediction mode according to the first MMVD information, reconstructing a motion vector of the current block which is to be used in the first inter prediction mode, by using a distance of a motion vector difference and a direction of a motion vector difference obtained from the bitstream; and reconstructing the current block by using the motion vector of the current block.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: April 11, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seungsoo Jeong, Minwoo Park
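
The entry above conditions block-level MMVD signalling on a sequence-level flag carried in the sequence parameter set: block syntax is parsed only when the tool is enabled for the whole sequence. The following sketch shows that gated parsing flow; the bitstream reader, flag names, and simplified index coding are hypothetical.

```python
# Illustrative parsing flow for a sequence-level enable flag gating a block-level
# tool flag. The reader API and flag names are hypothetical.

class BitstreamReader:
    """Tiny stand-in bitstream: pops pre-decoded flag values in order."""
    def __init__(self, flags):
        self._flags = list(flags)
    def read_flag(self):
        return self._flags.pop(0)

def parse_block_mmvd(reader, sps_mmvd_enabled):
    """Block-level MMVD info is parsed only when the sequence-level flag allows it."""
    if not sps_mmvd_enabled:
        return {"mmvd": False}                     # tool disabled for the whole sequence
    if not reader.read_flag():                     # block-level MMVD flag
        return {"mmvd": False}
    return {
        "mmvd": True,
        "distance_idx": reader.read_flag(),        # simplified: real syntax uses multi-bit indices
        "direction_idx": reader.read_flag(),
    }

print(parse_block_mmvd(BitstreamReader([1, 1, 0]), sps_mmvd_enabled=True))
```
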
  • Publication number: 20230103665
    Abstract: Provided is an image decoding method including: obtaining, from a sequence parameter set of a bitstream, information indicating a plurality of first reference image lists for an image sequence including a current image; obtaining, from a group header of the bitstream, an indicator for a current block group including a current block in the current image; obtaining a second reference image list based on a first reference image list indicated by the indicator; and prediction-decoding a lower block of the current block based on a reference image included in the second reference image list.
    Type: Application
    Filed: February 28, 2020
    Publication date: April 6, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Woongil CHOI, Minsoo PARK, Minwoo PARK, Seungsoo JEONG, Kiho CHOI, Narae CHOI, Anish TAMSE, Yinji PIAO
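
The entry above signals several candidate reference image lists once in the sequence parameter set and lets a group-header indicator select one, from which a second list is derived for actual prediction. A toy version of that selection and derivation flow follows; the data layout and the derivation rule are assumptions for illustration.

```python
# Illustrative selection of a reference picture list via a group-header indicator.
# Data shapes, contents, and the second-stage rule are assumptions.

sps_reference_lists = [            # candidate lists signalled once in the sequence parameter set
    [0, 4, 8],                     # each entry: picture order counts usable as references (assumed)
    [0, 2, 4, 6],
    [0, 1],
]

def derive_reference_list(group_header_indicator, current_poc):
    """Pick the first-stage list by indicator, then derive the list actually used."""
    first_list = sps_reference_lists[group_header_indicator]
    # Second-stage derivation (assumed): keep only references preceding the current picture.
    return [poc for poc in first_list if poc < current_poc]

print(derive_reference_list(group_header_indicator=1, current_poc=5))   # -> [0, 2, 4]
```
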
  • Publication number: 20230099494
    Abstract: In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object-to-lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.
    Type: Application
    Filed: September 29, 2021
    Publication date: March 30, 2023
    Inventors: Mehmet Kocamaz, Neeraj Sajjan, Sangmin Oh, David Nister, Junghyun Kwon, Minwoo Park
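
The entry above post-processes DNN segmentation masks, whose values combine object classes with lane identifiers, into object-to-lane assignments. The sketch below shows one simple way such an assignment could be computed (a majority vote of lane ids inside each detection box); the mask encoding and the voting rule are assumptions, not the patented procedure.

```python
import numpy as np

# Illustrative post-processing: assign each detected object to the lane whose id
# covers the most pixels inside the object's bounding box. Encoding is assumed.

def assign_objects_to_lanes(lane_mask, boxes):
    """lane_mask: HxW array of lane ids (0 = no lane); boxes: list of (x0, y0, x1, y1)."""
    assignments = []
    for x0, y0, x1, y1 in boxes:
        region = lane_mask[y0:y1, x0:x1]
        lane_ids, counts = np.unique(region[region > 0], return_counts=True)
        assignments.append(int(lane_ids[counts.argmax()]) if lane_ids.size else None)
    return assignments

mask = np.zeros((100, 200), dtype=int)
mask[:, :100] = 1                                  # lane 1 on the left half
mask[:, 100:] = 2                                  # lane 2 on the right half
print(assign_objects_to_lanes(mask, [(10, 20, 60, 80), (120, 20, 180, 80)]))  # -> [1, 2]
```
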
  • Patent number: 11616963
    Abstract: Provided is an image decoding method including determining a plurality of coding units in a chroma image by hierarchically splitting the chroma image, based on a split shape mode of blocks in the chroma image of a current image, and decoding the current image, based on the plurality of coding units in the chroma image. In this regard, the determining of the plurality of coding units in the chroma image may include, when a size or an area of a chroma block from among a plurality of chroma blocks to be generated by splitting a current chroma block in the chroma image is equal to or smaller than a preset size or a preset area, not allowing splitting of the current chroma block based on a split shape mode of the current chroma block, and determining at least one coding unit included in the current chroma block.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: March 28, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minwoo Park, Minsoo Park, Kiho Choi, Narae Choi, Woongil Choi, Chanyul Kim, Seungsoo Jeong, Anish Tamse, Yinji Piao
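
The entry above disallows further splitting of a chroma block when the sub-blocks produced by the signalled split shape mode would be at or below a preset size or area. A small illustration of that kind of check follows; the threshold value and the split geometries are assumed for demonstration.

```python
# Illustrative check that forbids a chroma split when any resulting sub-block would be
# at or below a preset area. The threshold and split shapes are assumptions.

MIN_CHROMA_AREA = 16            # preset area in chroma samples (assumed)

def split_allowed(width, height, split_mode):
    """split_mode: 'quad', 'binary_h', or 'binary_v' (simplified split shapes)."""
    if split_mode == "quad":
        sub_area = (width // 2) * (height // 2)
    elif split_mode == "binary_h":
        sub_area = width * (height // 2)
    else:                        # 'binary_v'
        sub_area = (width // 2) * height
    return sub_area > MIN_CHROMA_AREA

print(split_allowed(8, 8, "quad"))        # 4x4 sub-blocks (area 16) -> False, split not allowed
print(split_allowed(16, 16, "binary_v"))  # 8x16 sub-blocks (area 128) -> True
```
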
  • Patent number: 11613201
    Abstract: In various examples, high beam control for vehicles may be automated using a deep neural network (DNN) that processes sensor data received from vehicle sensors. The DNN may process the sensor data to output pixel-level semantic segmentation masks in order to differentiate actionable objects (e.g., vehicles with front or back lights lit, bicyclists, or pedestrians) from other objects (e.g., parked vehicles). Resulting segmentation masks output by the DNN(s), when combined with one or more post processing steps, may be used to generate masks for automated high beam on/off activation and/or dimming or shading—thereby providing additional illumination of an environment for the driver while controlling downstream effects of high beam glare for active vehicles.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: March 28, 2023
    Assignee: NVIDIA Corporation
    Inventors: Jincheng Li, Minwoo Park
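
The entry above derives high-beam on/off and dimming decisions from a pixel-level semantic segmentation of actionable objects. The sketch below shows one plausible post-processing step along those lines; the class ids, the area threshold, and the decision rule are assumptions rather than the implementation claimed in the patent.

```python
import numpy as np

# Illustrative post-processing of a segmentation mask for high-beam control: pixels of
# "actionable" classes define regions to dim, and the high beams switch off when such
# objects occupy enough of the view. Class ids and the threshold are assumptions.

ACTIONABLE_CLASSES = {1, 2, 3}           # e.g., 1 = lit vehicle, 2 = cyclist, 3 = pedestrian (assumed)

def high_beam_decision(seg_mask, area_threshold=0.01):
    actionable = np.isin(seg_mask, list(ACTIONABLE_CLASSES))
    dim_mask = actionable.astype(np.uint8)               # per-pixel regions to dim or shade
    high_beam_on = actionable.mean() < area_threshold    # turn off if actionable area is large
    return high_beam_on, dim_mask

mask = np.zeros((60, 80), dtype=int)
mask[20:30, 30:40] = 1                                   # an oncoming lit vehicle
on, dim = high_beam_decision(mask)
print(on, dim.sum())                                     # -> False 100 (vehicle covers ~2% of view)
```
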
  • Patent number: 11611762
    Abstract: A video decoding method includes determining whether an ultimate motion vector expression (UMVE) mode is allowed for an upper data unit including a current block, when the UMVE mode is allowed for the upper data unit, determining whether the UMVE mode is applied to the current block, when the UMVE mode is applied to the current block, determining a base motion vector of the current block, determining a correction distance and a correction direction for correction of the base motion vector, determining a motion vector of the current block by correcting the base motion vector according to the correction distance and the correction direction, and reconstructing the current block based on the motion vector of the current block.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: March 21, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seung-soo Jeong, Minwoo Park
  • Patent number: 11604944
    Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: March 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
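
The entry above regresses, for every input pixel, a distance and an angle toward the nearest line pixel. One way such dense predictions could be decoded into line locations is to let each pixel vote for the point it points at, as sketched below; the output encoding and the voting scheme are assumptions inspired by the abstract, not the patented method.

```python
import numpy as np

# Illustrative decoding of per-pixel (distance, angle) line predictions into a vote map
# whose peaks approximate line pixels. Encoding and voting are assumptions.

def decode_line_votes(distance_map, angle_map, height, width):
    """Accumulate votes at the location each pixel's (distance, angle) prediction points to."""
    votes = np.zeros((height, width), dtype=int)
    ys, xs = np.mgrid[0:height, 0:width]
    vx = np.rint(xs + distance_map * np.cos(angle_map)).astype(int)
    vy = np.rint(ys + distance_map * np.sin(angle_map)).astype(int)
    valid = (vx >= 0) & (vx < width) & (vy >= 0) & (vy < height)
    np.add.at(votes, (vy[valid], vx[valid]), 1)
    return votes

# Example: every pixel points at a vertical line located at column 10.
h, w = 32, 32
cols = np.arange(w)[None, :].repeat(h, axis=0)
dist = np.abs(10 - cols).astype(float)
ang = np.where(cols <= 10, 0.0, np.pi)
print(decode_line_votes(dist, ang, h, w).max())    # -> 32: every row's votes land on column 10
```
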
  • Publication number: 20230070926
    Abstract: Provided is a video decoding method including: determining whether or not to perform history-based motion vector prediction for inter-prediction of a current block, based on a location of the current block in a tile including a plurality of largest coding units; when it is determined to perform the history-based motion vector prediction on the current block, generating a motion information candidate list including history-based motion vector candidates; determining a motion vector of the current block by using a motion vector predictor determined from the motion information candidate list; and reconstructing the current block by using the motion vector of the current block, wherein, when a motion constraint is applied to a first tile group, when a reference picture of a first tile from among tiles included in the first tile group is a second picture, a motion vector of the first tile is not permitted to indicate a block of the second picture, the block being located outside a second tile group, and when the m
    Type: Application
    Filed: November 14, 2022
    Publication date: March 9, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Woongil CHOI, Gahyun RYU, Minsoo PARK, Minwoo PARK, Yumi SOHN, Seungsoo JEONG, Narae CHOI, Anish TAMSE
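
The entry above maintains a history of recently used motion vectors and consults it only when the current block's position within a tile allows it. A toy history table along those lines is sketched below; the table size, pruning rule, and position condition are assumptions, not the claimed behavior.

```python
from collections import deque

# Illustrative history-based motion vector prediction: recent motion vectors are kept in a
# small FIFO and offered as extra candidates, but the history is not consulted for the first
# block of a tile so it never crosses the tile boundary. Size and rules are assumptions.

HMVP_SIZE = 5

class HmvpTable:
    def __init__(self):
        self.table = deque(maxlen=HMVP_SIZE)

    def push(self, mv):
        if mv in self.table:                     # pruning: keep the table free of duplicates
            self.table.remove(mv)
        self.table.append(mv)

    def candidates(self, block_is_first_in_tile):
        return [] if block_is_first_in_tile else list(reversed(self.table))

hist = HmvpTable()
for mv in [(1, 0), (2, 1), (1, 0)]:
    hist.push(mv)
print(hist.candidates(block_is_first_in_tile=False))   # -> [(1, 0), (2, 1)], most recent first
```
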
  • Publication number: 20230072296
    Abstract: Provided are a method and apparatus, which, during video encoding and decoding processes, obtain chroma intra prediction mode information about a current chroma block, when the chroma intra prediction mode information indicates a direct mode (DM), determine a luma block including a luma sample corresponding to a chroma sample at a lower-right location with respect to a center of the current chroma block, determine a chroma intra prediction mode of the current chroma block based on an intra prediction mode of the determined luma block, and perform intra prediction on the current chroma block, based on the determined chroma intra prediction mode.
    Type: Application
    Filed: November 4, 2022
    Publication date: March 9, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minsoo PARK, Gahyun RYU, Minwoo PARK, Seungsoo JEONG, Kiho CHOI, Narae CHOI, Woongil CHOI, Anish TAMSE, Yinji PIAO
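
In the direct mode (DM) described above, the chroma intra prediction mode is copied from the luma block covering the sample just below and to the right of the chroma block's center. The sketch below derives that luma position and performs the lookup; the 4:2:0 subsampling factor and the mode-lookup callback are assumptions for illustration.

```python
# Illustrative derivation of the co-located luma position used by a DM-style chroma mode:
# take the sample at the lower-right of the chroma block's center and reuse the intra mode
# of the luma block covering it. Subsampling and the lookup callback are assumptions.

def dm_luma_position(chroma_x, chroma_y, chroma_w, chroma_h, subsample=2):
    """Return the luma coordinate at the lower-right of the chroma block center (4:2:0 assumed)."""
    center_x = chroma_x + chroma_w // 2
    center_y = chroma_y + chroma_h // 2
    return center_x * subsample, center_y * subsample

def derive_chroma_intra_mode(luma_mode_at, chroma_block):
    x, y = dm_luma_position(*chroma_block)
    return luma_mode_at(x, y)                    # DM: reuse the co-located luma intra mode

# Example: an 8x8 chroma block at (16, 8); a hypothetical lookup returns mode 18 there.
print(derive_chroma_intra_mode(lambda x, y: 18, (16, 8, 8, 8)))   # luma position (40, 24) -> 18
```
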
  • Publication number: 20230069175
    Abstract: Provided are a video decoding method and apparatus for, in a video encoding and decoding procedure, when a merge candidate list of a current block is configured, determining whether the number of merge candidates included in the merge candidate list is greater than 1 and is smaller than a predetermined maximum merge candidate number, when the number of the merge candidates included in the merge candidate list is greater than 1 and is smaller than the predetermined maximum merge candidate number, determining an additional merge candidate by using a first merge candidate and a second merge candidate of the merge candidate list of the current block, configuring the merge candidate list by adding the determined additional merge candidate to the merge candidate list, and performing prediction on the current block, based on the merge candidate list.
    Type: Application
    Filed: November 7, 2022
    Publication date: March 2, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anish TAMSE, Minwoo PARK, Minsoo PARK
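
The entry above adds an extra merge candidate, built from the first two candidates, whenever the list has more than one entry but is not yet full. The sketch below shows that flow, with averaging as an assumed combination rule and an assumed maximum list size.

```python
# Illustrative construction of an additional merge candidate from the first two entries of
# a merge candidate list. The averaging rule and the maximum size are assumptions.

MAX_MERGE_CANDIDATES = 6

def extend_merge_list(merge_list):
    """Append a combined candidate when the list has more than one entry but is not full."""
    if 1 < len(merge_list) < MAX_MERGE_CANDIDATES:
        (x0, y0), (x1, y1) = merge_list[0], merge_list[1]
        merge_list.append(((x0 + x1) // 2, (y0 + y1) // 2))   # pairwise-average candidate
    return merge_list

print(extend_merge_list([(4, -2), (8, 6)]))   # -> [(4, -2), (8, 6), (6, 2)]
```
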
  • Patent number: 11595647
    Abstract: Provided is an image decoding method including: determining a first coding block and a second coding block corresponding to the first coding block; when a size of the first coding block is equal to or smaller than a preset size, obtaining first split shape mode information and second split shape mode information from a bitstream; determining a split mode of the first coding block, based on the first split shape mode information, and determining a split mode of the second coding block, based on the second split shape mode information; and decoding a coding block of a first color component which is determined based on the split mode of the first coding block and a coding block of a second color component which is determined based on the split mode of the second coding block.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: February 28, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minsoo Park, Chanyul Kim, Minwoo Park, Seungsoo Jeong, Kiho Choi, Narae Choi, Woongil Choi, Anish Tamse, Yin-Ji Piao
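
The entry above parses two separate split shape modes, one per color component, once the first coding block is at or below a preset size. The sketch below illustrates that conditional parsing, with a single shared mode in the large-block case added as an assumption; the size threshold, reader callback, and mode names are hypothetical.

```python
# Illustrative parsing of split shape modes for two colour components: below a preset size
# the components get independent modes; otherwise (assumed here) one shared mode is read.

SMALL_BLOCK_SIZE = 64            # preset size threshold (assumed)

def parse_split_modes(read_mode, block_size):
    """read_mode: callback returning the next split shape mode from the bitstream."""
    if block_size <= SMALL_BLOCK_SIZE:
        return {"first": read_mode(), "second": read_mode()}
    shared = read_mode()
    return {"first": shared, "second": shared}

modes = iter(["no_split", "quad_split"])
print(parse_split_modes(lambda: next(modes), block_size=32))
# -> {'first': 'no_split', 'second': 'quad_split'}
```
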
  • Publication number: 20230054332
    Abstract: Provided are a video decoding method and apparatus for, in a video encoding and decoding procedure, when a merge candidate list of a current block is configured, determining whether the number of merge candidates included in the merge candidate list is greater than 1 and is smaller than a predetermined maximum merge candidate number, when the number of the merge candidates included in the merge candidate list is greater than 1 and is smaller than the predetermined maximum merge candidate number, determining an additional merge candidate by using a first merge candidate and a second merge candidate of the merge candidate list of the current block, configuring the merge candidate list by adding the determined additional merge candidate to the merge candidate list, and performing prediction on the current block, based on the merge candidate list.
    Type: Application
    Filed: November 7, 2022
    Publication date: February 23, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anish TAMSE, Minwoo PARK, Minsoo PARK
  • Publication number: 20230059592
    Abstract: Provided are a video decoding method and apparatus for, in a video encoding and decoding procedure, when a merge candidate list of a current block is configured, determining whether the number of merge candidates included in the merge candidate list is greater than 1 and is smaller than a predetermined maximum merge candidate number, when the number of the merge candidates included in the merge candidate list is greater than 1 and is smaller than the predetermined maximum merge candidate number, determining an additional merge candidate by using a first merge candidate and a second merge candidate of the merge candidate list of the current block, configuring the merge candidate list by adding the determined additional merge candidate to the merge candidate list, and performing prediction on the current block, based on the merge candidate list.
    Type: Application
    Filed: November 7, 2022
    Publication date: February 23, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anish TAMSE, Minwoo Park, Minsoo Park
  • Publication number: 20230055604
    Abstract: Provided is an image decoding method including determining a current chroma block having a rectangular shape corresponding to a current luma block included in one of a plurality of luma blocks, determining a piece of motion information for the current chroma block and a chroma block adjacent to the current chroma block by using motion information of the current chroma block and the adjacent chroma block, and performing inter prediction on the current chroma block and the adjacent chroma block by using the piece of motion information for the current chroma block and the adjacent chroma block to generate prediction blocks of the current chroma block and the adjacent chroma block.
    Type: Application
    Filed: October 27, 2022
    Publication date: February 23, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anish Tamse, Woongil Choi, Minwoo Park, Seungsoo Jeong, Yinji Piao, Gahyun Ryu, Kiho Choi, Narae Choi, Minsoo Park
  • Publication number: 20230054879
    Abstract: Provided are a method and apparatus, which, during video encoding and decoding processes, obtain chroma intra prediction mode information about a current chroma block, when the chroma intra prediction mode information indicates a direct mode (DM), determine a luma block including a luma sample corresponding to a chroma sample at a lower-right location with respect to a center of the current chroma block, determine a chroma intra prediction mode of the current chroma block based on an intra prediction mode of the determined luma block, and perform intra prediction on the current chroma block, based on the determined chroma intra prediction mode.
    Type: Application
    Filed: November 4, 2022
    Publication date: February 23, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minsoo PARK, Gahyun RYU, Minwoo PARK, Seungsoo JEONG, Kiho CHOI, Narae CHOI, Woongil CHOI, Anish TAMSE, Yinji PIAO
  • Publication number: 20230047368
    Abstract: Provided are a method and apparatus, which, during video encoding and decoding processes, obtain chroma intra prediction mode information about a current chroma block, when the chroma intra prediction mode information indicates a direct mode (DM), determine a luma block including a luma sample corresponding to a chroma sample at a lower-right location with respect to a center of the current chroma block, determine a chroma intra prediction mode of the current chroma block based on an intra prediction mode of the determined luma block, and perform intra prediction on the current chroma block, based on the determined chroma intra prediction mode.
    Type: Application
    Filed: November 4, 2022
    Publication date: February 16, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minsoo PARK, Gahyun RYU, Minwoo PARK, Seungsoo JEONG, Kiho CHOI, Narae CHOI, Woongil CHOI, Anish TAMSE, Yinji PIAO
  • Publication number: 20230047715
    Abstract: Provided are a method and apparatus, which, during video encoding and decoding processes, obtain chroma intra prediction mode information about a current chroma block, when the chroma intra prediction mode information indicates a direct mode (DM), determine a luma block including a luma sample corresponding to a chroma sample at a lower-right location with respect to a center of the current chroma block, determine a chroma intra prediction mode of the current chroma block based on an intra prediction mode of the determined luma block, and perform intra prediction on the current chroma block, based on the determined chroma intra prediction mode.
    Type: Application
    Filed: November 4, 2022
    Publication date: February 16, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minsoo PARK, Gahyun RYU, Minwoo PARK, Seungsoo JEONG, Kiho CHOI, Narae CHOI, Woongil CHOI, Anish TAMSE, Yinji PIAO
  • Patent number: 11579629
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: February 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister