Patents by Inventor Joon-Young Lee
Joon-Young Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200336802
Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for automatic tagging of videos. In particular, in one or more embodiments, the disclosed systems generate a set of tagged feature vectors (e.g., tagged feature vectors based on action-rich digital videos) to utilize in generating tags for an input digital video. For instance, the disclosed systems can extract a set of frames from the input digital video and generate feature vectors from the set of frames. In some embodiments, the disclosed systems generate aggregated feature vectors from the feature vectors. Furthermore, the disclosed systems can utilize the feature vectors (or aggregated feature vectors) to identify similar tagged feature vectors from the set of tagged feature vectors. Additionally, the disclosed systems can generate a set of tags for the input digital video by aggregating one or more tags corresponding to the identified similar tagged feature vectors.
Type: Application
Filed: April 16, 2019
Publication date: October 22, 2020
Inventors: Bryan Russell, Ruppesh Nalwaya, Markus Woodson, Joon-Young Lee, Hailin Jin
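The retrieval-and-aggregation step this abstract describes can be sketched as nearest-neighbor lookup over tagged feature vectors. This is a minimal illustration, not the patented method: the toy vectors, tag names, and the choice of cosine similarity and top-k aggregation are all assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def tag_video(query_vec, tagged_vectors, k=2):
    """Aggregate tags from the k tagged feature vectors most similar
    to the query video's feature vector (hypothetical sketch)."""
    ranked = sorted(tagged_vectors, key=lambda item: -cosine_sim(query_vec, item[0]))
    tags = set()
    for _, vec_tags in ranked[:k]:
        tags |= set(vec_tags)
    return tags

# Toy "tagged feature vectors" standing in for action-rich video features.
library = [
    (np.array([1.0, 0.0, 0.0]), ["running"]),
    (np.array([0.9, 0.1, 0.0]), ["running", "outdoor"]),
    (np.array([0.0, 0.0, 1.0]), ["cooking"]),
]

print(tag_video(np.array([1.0, 0.05, 0.0]), library))
```

A query vector close to the two "running" exemplars inherits the union of their tags.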
-
Patent number: 10810435
Abstract: In implementations of segmenting objects in video sequences, user annotations designate an object in any image frame of a video sequence, without requiring user annotations for all image frames. An interaction network generates a mask for an object in an image frame annotated by a user, and is coupled both internally and externally to a propagation network that propagates the mask to other image frames of the video sequence. Feature maps are aggregated for each round of user annotations and couple the interaction network and the propagation network internally. The interaction network and the propagation network are trained jointly using synthetic annotations in a multi-round training scenario, in which weights of the interaction network and the propagation network are adjusted after multiple synthetic annotations are processed, resulting in a trained object segmentation system that can reliably generate realistic object masks.
Type: Grant
Filed: November 7, 2018
Date of Patent: October 20, 2020
Assignee: Adobe Inc.
Inventors: Joon-Young Lee, Seoungwug Oh, Ning Xu
-
Publication number: 20200311901
Abstract: Embodiments herein describe a framework for classifying images. In some embodiments, it is determined whether an image includes synthetic image content. If it does, characteristics of the image are analyzed to determine if the image includes characteristics particular to panoramic images (e.g., a threshold equivalency of pixel values along the top and/or bottom boundaries of the image, or a difference between the summed pixel values of the right vertical boundary and the summed pixel values of the left vertical boundary that is less than or equal to a threshold value). If the image includes characteristics particular to panoramic images, the image is classified as a synthetic panoramic image. If the image is determined to not include synthetic image content, a neural network is applied to the image and the image is classified as either non-synthetic panoramic or non-synthetic non-panoramic.
Type: Application
Filed: April 1, 2019
Publication date: October 1, 2020
Inventors: Qi Sun, Li-Yi Wei, Joon-Young Lee, Jonathan Eisenmann, Jinwoong Jung, Byungmoon Kim
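The boundary test in the abstract exploits the fact that an equirectangular panorama wraps horizontally, so its left and right pixel columns should nearly match. A minimal sketch of that single check (threshold value and grayscale input are assumptions; the patent describes additional top/bottom checks not shown here):

```python
import numpy as np

def looks_panoramic(image, threshold=10.0):
    """Compare the summed pixel values of the left and right vertical
    boundaries; a small difference suggests horizontal wrap-around."""
    left = image[:, 0].astype(float).sum()
    right = image[:, -1].astype(float).sum()
    return abs(left - right) <= threshold

# Toy images: one with matching edge columns, one without.
pano = np.zeros((4, 8)); pano[:, 0] = 5; pano[:, -1] = 5
non_pano = np.zeros((4, 8)); non_pano[:, 0] = 50

print(looks_panoramic(pano), looks_panoramic(non_pano))
```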
-
Publication number: 20200250436
Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
Type: Application
Filed: April 23, 2020
Publication date: August 6, 2020
Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
-
Patent number: 10735237
Abstract: A method and apparatus for generating a preamble symbol in an Orthogonal Frequency Division Multiplexing (OFDM) system. The method includes generating a first main body sequence in the time domain by performing an inverse fast Fourier transform (IFFT) on a preset sequence in the frequency domain, generating a first postfix by copying samples from a preset section of the first main body sequence, generating a first prefix by copying samples from at least a portion of the section that remains after excluding the preset section from the first main body sequence, and generating a plurality of symbols based on a combination of the first main body sequence, the first prefix, and the first postfix.
Type: Grant
Filed: June 28, 2018
Date of Patent: August 4, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Min-ho Kim, Jung-hyun Park, Nam-hyun Kim, Joon-young Lee, Jin-joo Chung, Doo-chan Hwang
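The construction in the abstract can be sketched numerically: IFFT a frequency-domain sequence to get the main body, copy a tail section as the postfix, and copy part of the remaining (non-tail) section as the prefix. Section positions, lengths, and the all-ones test sequence below are illustrative assumptions, not the claimed parameters.

```python
import numpy as np

def build_preamble(freq_seq, prefix_len, postfix_len):
    """Assemble prefix + main body + postfix from one IFFT output
    (a sketch of the claimed structure, not the exact sections)."""
    body = np.fft.ifft(freq_seq)       # time-domain main body sequence
    postfix = body[-postfix_len:]      # copied from a preset tail section
    prefix = body[:prefix_len]         # copied from the remaining section
    return np.concatenate([prefix, body, postfix])

sym = build_preamble(np.ones(8), prefix_len=3, postfix_len=2)
print(len(sym))  # 3 + 8 + 2 = 13
```

The prefix and postfix are verbatim copies of body samples, which is what lets a receiver correlate across the symbol boundary.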
-
Publication number: 20200241574
Abstract: Systems and techniques are described that provide for generalizable approach policy learning and implementation for robotic object approaching. Described techniques provide fast and accurate approaching of a specified object, or type of object, in many different environments. The described techniques enable a robot to receive an identification of an object or type of object from a user, and then navigate to the desired object, without further control from the user. Moreover, the approach of the robot to the desired object is performed efficiently, e.g., with a minimum number of movements. Further, the approach techniques may be used even when the robot is placed in a new environment, such as when the same type of object must be approached in multiple settings.
Type: Application
Filed: January 30, 2019
Publication date: July 30, 2020
Inventors: Zhe Lin, Xin Ye, Joon-Young Lee, Jianming Zhang
-
Patent number: 10726313
Abstract: Various embodiments describe active learning methods for training temporal action localization models used to localize actions in untrimmed videos. A trainable active learning selection function is used to select the unlabeled samples that can improve the temporal action localization model the most. The selected unlabeled samples are then annotated and used to retrain the temporal action localization model. In some embodiments, the trainable active learning selection function includes a trainable performance prediction model that maps a video sample and a temporal action localization model to a predicted performance improvement for the temporal action localization model.
Type: Grant
Filed: April 19, 2018
Date of Patent: July 28, 2020
Assignee: Adobe Inc.
Inventors: Joon-Young Lee, Hailin Jin, Fabian David Caba Heilbron
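The selection step reduces to ranking unlabeled samples by a predicted performance gain. A minimal sketch, where the dictionary of gains stands in for the trainable performance prediction model (the sample names and gain values are invented):

```python
def select_samples(unlabeled, predict_gain, k=1):
    """Pick the k unlabeled video samples whose annotation is predicted
    to improve the localization model the most."""
    ranked = sorted(unlabeled, key=predict_gain, reverse=True)
    return ranked[:k]

# Hypothetical stand-in for the learned performance prediction model:
# a lookup of predicted mAP improvement per video.
gains = {"vid_a": 0.02, "vid_b": 0.11, "vid_c": 0.05}

print(select_samples(list(gains), gains.get, k=2))
```

In the patented setting the predictor is itself trained, so the ranking adapts as the localization model improves.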
-
Patent number: 10671855
Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
Type: Grant
Filed: April 10, 2018
Date of Patent: June 2, 2020
Assignee: Adobe Inc.
Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
-
Publication number: 20200143171
Abstract: In implementations of segmenting objects in video sequences, user annotations designate an object in any image frame of a video sequence, without requiring user annotations for all image frames. An interaction network generates a mask for an object in an image frame annotated by a user, and is coupled both internally and externally to a propagation network that propagates the mask to other image frames of the video sequence. Feature maps are aggregated for each round of user annotations and couple the interaction network and the propagation network internally. The interaction network and the propagation network are trained jointly using synthetic annotations in a multi-round training scenario, in which weights of the interaction network and the propagation network are adjusted after multiple synthetic annotations are processed, resulting in a trained object segmentation system that can reliably generate realistic object masks.
Type: Application
Filed: November 7, 2018
Publication date: May 7, 2020
Applicant: Adobe Inc.
Inventors: Joon-Young Lee, Seoungwug Oh, Ning Xu
-
Publication number: 20200117906
Abstract: Certain aspects involve using a space-time memory network to locate one or more target objects in video content for segmentation or other object classification. In one example, a video editor generates a query key map and a query value map by applying a space-time memory network to features of a query frame from video content. The video editor retrieves a memory key map and a memory value map that are computed, with the space-time memory network, from a set of memory frames from the video content. The video editor computes memory weights by applying a similarity function to the memory key map and the query key map. The video editor classifies content in the query frame as depicting the target feature using a weighted summation that includes the memory weights applied to memory locations in the memory value map.
Type: Application
Filed: March 5, 2019
Publication date: April 16, 2020
Inventors: Joon-Young Lee, Ning Xu, Seoungwug Oh
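The memory read described here (similarity between query and memory keys producing weights over memory values) is a soft-attention operation. A toy sketch under stated assumptions: dot-product similarity with a softmax, and flat 2-D key/value arrays rather than full spatial maps.

```python
import numpy as np

def memory_read(query_key, mem_keys, mem_values):
    """Weight each memory location by its key's similarity to the
    query key, then return the weighted summation of memory values."""
    scores = mem_keys @ query_key           # dot-product similarity
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over memory locations
    return weights @ mem_values

# Two memory locations with orthogonal keys and distinct values.
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
values = np.array([[10.0], [20.0]])

out = memory_read(np.array([5.0, 0.0]), keys, values)
print(out)
```

A query key aligned with the first memory key pulls the output toward that location's value, which is how the network retrieves the matching object appearance from past frames.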
-
Patent number: 10600150
Abstract: The present disclosure includes methods and systems for modifying the orientation of a spherical panorama digital image based on an inertial measurement device. In particular, one or more embodiments of the disclosed systems and methods correct for tilt and/or roll in a digital camera utilized to capture a spherical panorama digital image by detecting changes in orientation of an inertial measurement device and generating an enhanced spherical panorama digital image based on the detected changes. In particular, in one or more embodiments, the disclosed systems and methods modify the orientation of a spherical panorama digital image in three-dimensional space based on changes in orientation of an inertial measurement device and resample pixels based on the modified orientation to generate an enhanced spherical panorama digital image.
Type: Grant
Filed: October 31, 2016
Date of Patent: March 24, 2020
Assignee: Adobe Inc.
Inventors: Byungmoon Kim, Joon-Young Lee, Jinwoong Jung, Gavin Miller
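The correction amounts to rotating each pixel's viewing ray by the inverse of the orientation change reported by the inertial measurement unit, then resampling along the corrected ray. A sketch of the rotation step only (roll about a single axis; the resampling and the full tilt/roll decomposition are omitted, and the axis convention is an assumption):

```python
import numpy as np

def roll_rotation(angle):
    """3x3 rotation about the z (viewing) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def corrected_direction(pixel_dir, measured_roll):
    """Undo the roll measured by the IMU before resampling the
    panorama along the corrected ray."""
    return roll_rotation(-measured_roll) @ pixel_dir

# A ray captured under 0.3 rad of roll maps back to its upright direction.
tilted = roll_rotation(0.3) @ np.array([1.0, 0.0, 0.0])
d = corrected_direction(tilted, 0.3)
print(np.round(d, 6))
```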
-
Patent number: 10600171
Abstract: Certain embodiments involve blending images using neural networks to automatically generate alignment or photometric adjustments that control image blending operations. For instance, a foreground image and background image data are provided to an adjustment-prediction network that has been trained, using a reward network, to compute alignment or photometric adjustments that optimize blending reward scores. An adjustment action (e.g., an alignment or photometric adjustment) is computed by applying the adjustment-prediction network to the foreground image and the background image data. A target background region is extracted from the background image data by applying the adjustment action to the background image data. The target background region is blended with the foreground image, and the resultant blended image is outputted.
Type: Grant
Filed: March 7, 2018
Date of Patent: March 24, 2020
Assignee: Adobe Inc.
Inventors: Jianming Zhang, Zhe Lin, Xiaohui Shen, Wei-Chih Hung, Joon-Young Lee
-
Patent number: 10497099
Abstract: The present disclosure includes methods and systems for correcting distortions in spherical panorama digital images. In particular, one or more embodiments of the disclosed systems and methods correct for tilt and/or roll in a digital camera utilized to capture a spherical panorama digital image by determining a corrected orientation and generating an enhanced spherical panorama digital image based on the corrected orientation. In particular, in one or more embodiments, the disclosed systems and methods identify line segments in a spherical panorama digital image, map the line segments to a three-dimensional space, generate great circles based on the identified line segments, and determine a corrected orientation based on the generated great circles.
Type: Grant
Filed: October 3, 2018
Date of Patent: December 3, 2019
Assignee: Adobe Inc.
Inventors: Byungmoon Kim, Joon-Young Lee, Jinwoong Jung, Gavin Miller
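The line-segment-to-great-circle mapping has a compact geometric core: two 3-D directions sampled from one image line segment, projected onto the unit sphere, define a great circle whose plane normal is their cross product. A sketch of just that step (how the normals are then combined into a corrected orientation is not shown):

```python
import numpy as np

def great_circle_normal(p1, p2):
    """Normal of the great-circle plane through two unit directions
    sampled from a single line segment on the sphere."""
    n = np.cross(p1, p2)
    return n / np.linalg.norm(n)

# Two directions in the horizontal plane yield the vertical normal.
n = great_circle_normal(np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 1.0, 0.0]))
print(n)
```

Vertical structures in the scene produce great circles whose normals should be horizontal; deviations from that reveal the camera's tilt and roll.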
-
Publication number: 20190325275
Abstract: Various embodiments describe active learning methods for training temporal action localization models used to localize actions in untrimmed videos. A trainable active learning selection function is used to select the unlabeled samples that can improve the temporal action localization model the most. The selected unlabeled samples are then annotated and used to retrain the temporal action localization model. In some embodiments, the trainable active learning selection function includes a trainable performance prediction model that maps a video sample and a temporal action localization model to a predicted performance improvement for the temporal action localization model.
Type: Application
Filed: April 19, 2018
Publication date: October 24, 2019
Inventors: Joon-Young Lee, Hailin Jin, Fabian David Caba Heilbron
-
Patent number: 10454745
Abstract: The present disclosure relates to a pre-5th-Generation (5G) or 5G communication system to be provided for supporting higher data rates than Beyond 4th-Generation (4G) communication systems such as Long Term Evolution (LTE). The present invention provides a method and a device for cancelling inter-symbol interference in a wireless communication system. A method for a base station in a wireless communication system can comprise the steps of: transmitting multiple synchronous signals through multiple antennas to a terminal; receiving information on a propagation delay difference among the multiple synchronous signals from the terminal; and determining signal transmission timing for each of the multiple antennas on the basis of the information on the propagation delay difference.
Type: Grant
Filed: July 1, 2016
Date of Patent: October 22, 2019
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyunyong Lee, Jung Ju Kim, Joon-Young Lee
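One plausible reading of the timing-determination step is a per-antenna transmit advance that compensates each antenna's reported extra delay so the signals arrive at the terminal aligned. This is a hedged sketch of that idea only; the claimed computation is not specified in the abstract.

```python
def timing_advances(delay_diffs):
    """Give each antenna a transmit-time advance equal to its
    propagation delay in excess of the least-delayed antenna
    (illustrative interpretation, units arbitrary)."""
    base = min(delay_diffs)
    return [d - base for d in delay_diffs]

# Antenna 2 reports 3 units more delay than antenna 1, so it
# transmits 3 units earlier.
print(timing_advances([0.0, 3.0, 1.0]))
```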
-
Publication number: 20190311202
Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
Type: Application
Filed: April 10, 2018
Publication date: October 10, 2019
Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
-
Patent number: 10430661
Abstract: Techniques and systems are described to generate a compact video feature representation for sequences of frames in a video. In one example, values of features are extracted from each frame of a plurality of frames of a video using machine learning, e.g., through use of a convolutional neural network. A video feature representation is generated of temporal order dynamics of the video, e.g., through use of a recurrent neural network. For example, a maximum value is maintained of each feature of the plurality of features that has been reached for the plurality of frames in the video. A timestamp is also maintained as indicative of when the maximum value is reached for each feature of the plurality of features. The video feature representation is then output as a basis to determine similarity of the video with at least one other video based on the video feature representation.
Type: Grant
Filed: December 20, 2016
Date of Patent: October 1, 2019
Assignee: Adobe Inc.
Inventors: Hao Hu, Zhaowen Wang, Joon-Young Lee, Zhe Lin
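The per-feature "maximum value plus timestamp of the maximum" representation can be computed directly from a frame-by-feature matrix. A minimal sketch (the patent realizes this inside a recurrent network; the toy per-frame features below are invented):

```python
import numpy as np

def max_with_timestamps(frame_features):
    """Per feature, keep the maximum value reached over the frames
    and the frame index (timestamp) where it first occurs, then
    concatenate both into one compact representation."""
    feats = np.asarray(frame_features)   # shape: (num_frames, num_features)
    max_vals = feats.max(axis=0)
    timestamps = feats.argmax(axis=0)
    return np.concatenate([max_vals, timestamps])

# Three frames, two features per frame.
rep = max_with_timestamps([[0.1, 0.9],
                           [0.7, 0.2],
                           [0.3, 0.4]])
print(rep)  # [0.7, 0.9, 1.0, 0.0]
```

Keeping the timestamps, not just the maxima, is what preserves the temporal order dynamics for the downstream similarity comparison.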
-
Patent number: 10419684
Abstract: An apparatus for adjusting camera exposure includes: a virtual image generator that generates a plurality of virtual images by changing the brightness of an image photographed by a camera; a feature image generator that generates a plurality of feature images respectively indicating features of the plurality of virtual images; and an exposure controller that maps feature values of the plurality of feature images to brightness, estimates the reference brightness that corresponds to the maximum feature value, increases camera exposure when the reference brightness is brighter than the photographed image, and decreases the camera exposure when the reference brightness is darker than the photographed image.
Type: Grant
Filed: August 29, 2016
Date of Patent: September 17, 2019
Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: In-So Kweon, In wook Shim, Joon Young Lee
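The control loop in the abstract can be sketched end-to-end: synthesize brightness-varied virtual images, score each with a feature, and nudge exposure toward the best-scoring brightness. Assumptions here: gamma curves stand in for the brightness change, and total gradient magnitude stands in for the (unspecified) feature value.

```python
import numpy as np

def gradient_feature(img):
    """Feature value: total gradient magnitude, a common proxy for
    image information content (the patent's exact feature differs)."""
    gx = np.abs(np.diff(img, axis=1)).sum()
    gy = np.abs(np.diff(img, axis=0)).sum()
    return gx + gy

def exposure_step(img, gammas=(0.5, 1.0, 2.0)):
    """Return +1 (raise exposure), -1 (lower it), or 0 (hold),
    depending on which virtual brightness maximizes the feature."""
    virtuals = [np.clip(img, 0.0, 1.0) ** g for g in gammas]
    best = max(range(len(gammas)), key=lambda i: gradient_feature(virtuals[i]))
    if gammas[best] < 1.0:
        return +1   # brighter virtual image scored best
    if gammas[best] > 1.0:
        return -1   # darker virtual image scored best
    return 0

dark = np.array([[0.01, 0.04], [0.04, 0.01]])
bright = np.array([[0.96, 0.99], [0.99, 0.96]])
print(exposure_step(dark), exposure_step(bright))
```

An underexposed shot scores best after brightening, so the controller raises exposure; an overexposed shot does the opposite.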
-
Publication number: 20190279346
Abstract: Certain embodiments involve blending images using neural networks to automatically generate alignment or photometric adjustments that control image blending operations. For instance, a foreground image and background image data are provided to an adjustment-prediction network that has been trained, using a reward network, to compute alignment or photometric adjustments that optimize blending reward scores. An adjustment action (e.g., an alignment or photometric adjustment) is computed by applying the adjustment-prediction network to the foreground image and the background image data. A target background region is extracted from the background image data by applying the adjustment action to the background image data. The target background region is blended with the foreground image, and the resultant blended image is outputted.
Type: Application
Filed: March 7, 2018
Publication date: September 12, 2019
Inventors: Jianming Zhang, Zhe Lin, Xiaohui Shen, Wei-Chih Hung, Joon-Young Lee
-
Publication number: 20190130588
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Application
Filed: December 21, 2018
Publication date: May 2, 2019
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
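The reference-selection step can be sketched as a nearest-neighbor search over a material library in reflectance space. The library entries, two-element reflectance vectors, and L2 distance below are all illustrative assumptions; the patented pipeline uses richer calibrated reflectance measurements.

```python
import numpy as np

def select_references(observed, library, k=2):
    """Pick the k reference materials whose reflectance vectors are
    closest (L2) to the calibrated observation; these anchor the
    final material model."""
    dists = {name: np.linalg.norm(observed - refl)
             for name, refl in library.items()}
    return sorted(dists, key=dists.get)[:k]

# Toy material library: name -> reflectance vector.
lib = {
    "plastic": np.array([0.2, 0.2]),
    "metal":   np.array([0.9, 0.1]),
    "wood":    np.array([0.3, 0.25]),
}

print(select_references(np.array([0.25, 0.22]), lib))
```

A surface whose measured reflectance sits between two diffuse references is modeled from those two, ignoring the distant metallic entry.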