Patents by Inventor Jung-Suk Lee
Jung-Suk Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240121385
Abstract: The present invention relates to an intra prediction method and apparatus. The image decoding method according to the present invention may comprise decoding information on intra prediction; and generating a prediction block by performing intra prediction for a current block based on the information on intra prediction. The information on intra prediction may include information on an intra prediction mode, and the intra prediction mode may include a curved intra prediction mode.
Type: Application
Filed: October 4, 2023
Publication date: April 11, 2024
Inventors: Hyun Suk KO, Jin Ho LEE, Sung Chang LIM, Jung Won KANG, Ha Hyun LEE, Dong San JUN, Seung Hyun CHO, Hui Yong KIM, Jin Soo CHOI
-
Patent number: 11950502
Abstract: Provided is a novel compound capable of improving the luminous efficiency, stability and life span of a device, an organic electric element using the same, and an electronic device thereof.
Type: Grant
Filed: April 17, 2021
Date of Patent: April 2, 2024
Assignee: DUK SAN NEOLUX CO., LTD.
Inventors: Hyoung Keun Park, Yun Suk Lee, Ki Ho So, Jong Gwang Park, Yeon Seok Jeong, Jung Hwan Park, Sun Hee Lee, Hak Young Lee
-
Patent number: 11943994
Abstract: A display device and a method of manufacturing the same are provided. The display device comprises a first base substrate, a first barrier layer disposed on the first base substrate, a second base substrate disposed on the first barrier layer, at least one transistor disposed on the second base substrate, and an organic light emitting diode disposed on the at least one transistor, wherein the first barrier layer includes a silicon oxide and has an adhesion force of 200 gf/inch or more to the second base substrate.
Type: Grant
Filed: August 17, 2021
Date of Patent: March 26, 2024
Assignee: Samsung Display Co., Ltd.
Inventors: Chul Min Bae, Eun Jin Kwak, Jin Suk Lee, Jung Yun Jo, Ji Hye Han, Young In Hwang
-
Patent number: 11943436
Abstract: The present invention relates to a video encoding/decoding method and apparatus. The video decoding method according to the present invention may comprise decoding filter information on a coding unit; classifying samples in the coding unit into classes on a per block classification unit basis; and filtering the coding unit having the samples classified into the classes on a per block classification unit basis by using the filter information.
Type: Grant
Filed: February 21, 2022
Date of Patent: March 26, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sung Chang Lim, Jung Won Kang, Ha Hyun Lee, Dong San Jun, Hyun Suk Ko, Jin Ho Lee, Hui Yong Kim
-
Patent number: 11943475
Abstract: The present invention relates to an image encoding/decoding method and apparatus. The image decoding method according to the present invention may comprise configuring an MPM list based on intra-prediction modes of neighbor blocks of a current block and a number of frequencies of the intra-prediction modes of the neighbor blocks, deriving an intra-prediction mode of the current block based on the MPM list, and performing intra-prediction for the current block based on the intra-prediction mode.
Type: Grant
Filed: December 12, 2022
Date of Patent: March 26, 2024
Assignee: Intellectual Discovery Co., Ltd.
Inventors: Hyun Suk Ko, Sung Chang Lim, Jung Won Kang, Jin Ho Lee, Ha Hyun Lee, Dong San Jun, Hui Yong Kim
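The abstract above describes configuring a Most Probable Mode (MPM) list from the intra-prediction modes of neighboring blocks and their frequencies. The following is a minimal illustrative sketch of that idea, not the patented method itself: candidate modes are ranked by how often they occur among neighbors, and the list is padded with assumed default modes (Planar=0, DC=1, and two angular modes) if it comes up short.

```python
from collections import Counter

def build_mpm_list(neighbor_modes, list_size=6):
    """Sketch: rank neighbor intra-prediction modes by frequency
    (ties broken by mode index), then pad with default modes.
    The defaults and list size are illustrative assumptions."""
    counts = Counter(neighbor_modes)
    # Most frequent neighbor modes first.
    mpm = sorted(counts, key=lambda m: (-counts[m], m))[:list_size]
    for default in (0, 1, 50, 18):  # hypothetical fallback modes
        if len(mpm) >= list_size:
            break
        if default not in mpm:
            mpm.append(default)
    return mpm
```

The decoder would then derive the current block's mode from this list, e.g. by an index signaled in the bitstream.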
-
Publication number: 20240098311
Abstract: The present invention relates to an image encoding/decoding method and apparatus. An image encoding method according to the present invention may comprise generating a transform block by performing at least one of transform and quantization; grouping at least one coefficient included in the transform block into at least one coefficient group (CG); scanning at least one coefficient included in the coefficient group; and encoding the at least one coefficient.
Type: Application
Filed: November 20, 2023
Publication date: March 21, 2024
Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, INDUSTRY ACADEMY COOPERATION FOUNDATION OF SEJONG UNIVERSITY
Inventors: Sung Chang LIM, Jung Won KANG, Hyun Suk KO, Jin Ho LEE, Dong San JUN, Ha Hyun LEE, Seung Hyun CHO, Hui Yong KIM, Jin Soo CHOI, Yung Lyul LEE, Jun Woo CHOI
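To make the grouping step above concrete, here is a hedged sketch (the 4x4 group size is an assumption, not taken from the patent) of partitioning a transform block into coefficient groups (CGs) before scanning:

```python
def group_into_cgs(block, cg=4):
    """Sketch: partition a 2D transform block into cg x cg
    coefficient groups, row-major over the groups. Assumes the
    block dimensions are multiples of cg."""
    h, w = len(block), len(block[0])
    groups = []
    for by in range(0, h, cg):
        for bx in range(0, w, cg):
            groups.append([block[by + y][bx + x]
                           for y in range(cg) for x in range(cg)])
    return groups
```

Each group would then be scanned (e.g. diagonally) and its coefficients entropy-coded.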
-
Patent number: 11936853
Abstract: The present invention relates to an image encoding method and an image decoding method. The image decoding method includes partitioning a picture into a plurality of coding units, constructing a coding unit group including at least one coding unit of the plurality of coding units, obtaining coding information in units of one coding unit group, and decoding at least one coding unit of the plurality of coding units included in the coding unit group by using the obtained coding information.
Type: Grant
Filed: January 9, 2023
Date of Patent: March 19, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jung Won Kang, Sung Chang Lim, Hyun Suk Ko, Ha Hyun Lee, Jin Ho Lee, Dong San Jun, Seung Hyun Cho, Hui Yong Kim, Jin Soo Choi
-
Patent number: 11935236
Abstract: Provided are a method and an apparatus for interlocking a lesion location between a 2D medical image and 3D tomosynthesis images including a plurality of 3D image slices.
Type: Grant
Filed: January 4, 2023
Date of Patent: March 19, 2024
Assignee: Lunit Inc.
Inventors: Jung Hee Jang, Do Hyun Lee, Woo Suk Lee, Rae Yeong Lee
-
Publication number: 20240080455
Abstract: An image encoding/decoding method and apparatus for performing intra prediction mode based intra prediction are provided. An image decoding method may comprise decoding an intra prediction mode of a current block, deriving at least one intra prediction mode from the decoded intra prediction mode of the current block, generating two or more intra prediction blocks using the intra prediction mode of the current block and the derived intra prediction mode, and generating an intra prediction block of the current block based on the two or more intra prediction blocks.
Type: Application
Filed: November 14, 2023
Publication date: March 7, 2024
Applicant: LX SEMICON CO., LTD.
Inventors: Sung Chang LIM, Hyun Suk KO, Jung Won KANG, Jin Ho Lee, Ha Hyun LEE, Dong San Jun, Hui Yong KIM
-
Publication number: 20240080454
Abstract: An image encoding/decoding method and apparatus for performing intra prediction mode based intra prediction are provided. An image decoding method may comprise decoding an intra prediction mode of a current block, deriving at least one intra prediction mode from the decoded intra prediction mode of the current block, generating two or more intra prediction blocks using the intra prediction mode of the current block and the derived intra prediction mode, and generating an intra prediction block of the current block based on the two or more intra prediction blocks.
Type: Application
Filed: November 14, 2023
Publication date: March 7, 2024
Applicant: LX SEMICON CO., LTD.
Inventors: Sung Chang LIM, Hyun Suk KO, Jung Won KANG, Jin Ho Lee, Ha Hyun LEE, Dong San Jun, Hui Yong KIM
-
Patent number: 11924412
Abstract: An image encoding/decoding method and apparatus for performing representative sample-based intra prediction are provided. An image decoding method may comprise deriving an intra prediction mode of a current block, configuring a reference sample of the current block, and performing intra prediction for the current block based on the intra prediction mode and the reference sample, wherein the intra prediction is representative sample-based prediction.
Type: Grant
Filed: May 17, 2022
Date of Patent: March 5, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jin Ho Lee, Jung Won Kang, Hyun Suk Ko, Sung Chang Lim, Ha Hyun Lee, Dong San Jun, Hui Yong Kim
-
Publication number: 20240073413
Abstract: An image encoding/decoding method and apparatus for performing intra prediction using a plurality of reference sample lines are provided. An image decoding method may comprise configuring a plurality of reference sample lines, reconstructing an intra prediction mode of a current block, and performing intra prediction for the current block based on the intra prediction mode and the plurality of reference sample lines.
Type: Application
Filed: October 30, 2023
Publication date: February 29, 2024
Inventors: Jin Ho LEE, Jung Won KANG, Hyun Suk KO, Sung Chang LIM, Ha Hyun LEE, Dong San JUN, Hui Yong KIM
-
Patent number: 11917384
Abstract: Disclosed herein are systems and methods for processing speech signals in mixed reality applications. A method may include receiving an audio signal; determining, via first processors, whether the audio signal comprises a voice onset event; in accordance with a determination that the audio signal comprises the voice onset event: waking a second one or more processors; determining, via the second processors, that the audio signal comprises a predetermined trigger signal; in accordance with a determination that the audio signal comprises the predetermined trigger signal: waking third processors; performing, via the third processors, automatic speech recognition based on the audio signal; and in accordance with a determination that the audio signal does not comprise the predetermined trigger signal: forgoing waking the third processors; and in accordance with a determination that the audio signal does not comprise the voice onset event: forgoing waking the second processors.
Type: Grant
Filed: March 26, 2021
Date of Patent: February 27, 2024
Assignee: Magic Leap, Inc.
Inventors: David Thomas Roach, Jean-Marc Jot, Jung-Suk Lee
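The staged wake-up flow in the abstract above can be sketched as a small gating pipeline. This is an illustrative sketch only (the function names and stage labels are assumptions, not from the patent): each stage wakes the next, more power-hungry stage only when its check passes.

```python
def process_audio(frame, has_onset, is_trigger, run_asr):
    """Sketch of a cascaded wake-up: onset detector -> trigger
    (wake-word) check -> full ASR. Returns the list of stages
    woken and the ASR result (None if a gate rejected the frame)."""
    stages_woken = []
    if not has_onset(frame):          # stage 1: low-power onset detector
        return stages_woken, None     # forgo waking the second processor
    stages_woken.append("second")
    if not is_trigger(frame):         # stage 2: trigger-signal check
        return stages_woken, None     # forgo waking the third processor
    stages_woken.append("third")
    return stages_woken, run_asr(frame)  # stage 3: speech recognition
```

The benefit of the cascade is that the expensive recognizer runs only on audio that has already passed two cheap checks.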
-
Publication number: 20240027794
Abstract: Systems and methods for regulating the speed of movement of virtual objects presented by a wearable system are described. The wearable system may present three-dimensional (3D) virtual content that moves, e.g., laterally across the user's field of view and/or in perceived depth from the user. The speed of the movement may follow the profile of an S-curve, with a gradual increase to a maximum speed, and a subsequent gradual decrease in speed until an end point of the movement is reached. The decrease in speed may be more gradual than the increase in speed. This speed curve may be utilized in the movement of virtual objects for eye-tracking calibration. The wearable system may track the position of a virtual object (an eye-tracking target) which moves with a speed following the S-curve. This speed curve allows for rapid movement of the eye-tracking target, while providing a comfortable viewing experience and high accuracy in determining the initial and final positions of the eye as it tracks the target.
Type: Application
Filed: June 23, 2023
Publication date: January 25, 2024
Inventors: Yan Xu, Ikko Fushiki, Suraj Manjunath Shanbhag, Shiuli Das, Jung-Suk Lee
-
Publication number: 20230410835
Abstract: In some embodiments, a first audio signal is received via a first microphone, and a first probability of voice activity is determined based on the first audio signal. A second audio signal is received via a second microphone, and a second probability of voice activity is determined based on the first and second audio signals. Whether a first threshold of voice activity is met is determined based on the first and second probabilities of voice activity. In accordance with a determination that a first threshold of voice activity is met, it is determined that a voice onset has occurred, and an alert is transmitted to a processor based on the determination that the voice onset has occurred. In accordance with a determination that a first threshold of voice activity is not met, it is not determined that a voice onset has occurred.
Type: Application
Filed: August 31, 2023
Publication date: December 21, 2023
Inventors: Jung-Suk Lee, Jean-Marc Jot
-
Patent number: 11790935
Abstract: In some embodiments, a first audio signal is received via a first microphone, and a first probability of voice activity is determined based on the first audio signal. A second audio signal is received via a second microphone, and a second probability of voice activity is determined based on the first and second audio signals. Whether a first threshold of voice activity is met is determined based on the first and second probabilities of voice activity. In accordance with a determination that a first threshold of voice activity is met, it is determined that a voice onset has occurred, and an alert is transmitted to a processor based on the determination that the voice onset has occurred. In accordance with a determination that a first threshold of voice activity is not met, it is not determined that a voice onset has occurred.
Type: Grant
Filed: April 6, 2022
Date of Patent: October 17, 2023
Assignee: Magic Leap, Inc.
Inventors: Jung-Suk Lee, Jean-Marc Jot
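The decision step in the abstract above combines two voice-activity probabilities and compares the result with a threshold. Here is a minimal sketch of that shape; the averaging rule and threshold value are assumptions for illustration, not the patented formula:

```python
def voice_onset(p1, p2, threshold=0.5):
    """Sketch: fuse two per-microphone voice-activity probabilities
    (here by a simple average) and declare a voice onset if the
    combined value meets the threshold. Returns True when an alert
    should be sent to the processor."""
    combined = (p1 + p2) / 2.0
    return combined >= threshold
```

In a real system the second probability would itself be derived from both microphone signals (e.g., using inter-microphone coherence), which is what makes the two estimates complementary.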
-
Patent number: 11726349
Abstract: Systems and methods for regulating the speed of movement of virtual objects presented by a wearable system are described. The wearable system may present three-dimensional (3D) virtual content that moves, e.g., laterally across the user's field of view and/or in perceived depth from the user. The speed of the movement may follow the profile of an S-curve, with a gradual increase to a maximum speed, and a subsequent gradual decrease in speed until an end point of the movement is reached. The decrease in speed may be more gradual than the increase in speed. This speed curve may be utilized in the movement of virtual objects for eye-tracking calibration. The wearable system may track the position of a virtual object (an eye-tracking target) which moves with a speed following the S-curve. This speed curve allows for rapid movement of the eye-tracking target, while providing a comfortable viewing experience and high accuracy in determining the initial and final positions of the eye as it tracks the target.
Type: Grant
Filed: February 11, 2021
Date of Patent: August 15, 2023
Assignee: Magic Leap, Inc.
Inventors: Yan Xu, Ikko Fushiki, Suraj Manjunath Shanbhag, Shiuli Das, Jung-Suk Lee
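The asymmetric S-curve speed profile described above (faster ramp-up, slower ramp-down) can be sketched with a smoothstep ramp on each side. The breakpoint and the smoothstep shape below are illustrative assumptions, not the patented curve:

```python
def s_curve_speed(t, t_accel=0.3, v_max=1.0):
    """Sketch: speed over normalized time t in [0, 1]. Speed ramps
    smoothly up to v_max over [0, t_accel], then ramps smoothly back
    to zero over the longer interval [t_accel, 1], so the decrease
    is more gradual than the increase."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    if t < t_accel:                        # smooth acceleration phase
        u = t / t_accel
        return v_max * (3 * u * u - 2 * u ** 3)
    u = (t - t_accel) / (1.0 - t_accel)    # longer deceleration phase
    return v_max * (1.0 - (3 * u * u - 2 * u ** 3))
```

Integrating this speed over time yields the target's position, which the eye tracker samples while the user follows the target.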
-
Publication number: 20230217205
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Application
Filed: March 10, 2023
Publication date: July 6, 2023
Inventors: Colby Nelson LEIDER, Justin Dan MATHEW, Michael Z. LAND, Blaine Ivin WOOD, Jung-Suk LEE, Anastasia Andreyevna TAJIK, Jean-Marc JOT
-
Patent number: 11632646
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Grant
Filed: April 12, 2022
Date of Patent: April 18, 2023
Assignee: Magic Leap, Inc.
Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
-
Publication number: 20220240044
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Application
Filed: April 12, 2022
Publication date: July 28, 2022
Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot