Patents by Inventor Youngjun Yoo

Youngjun Yoo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11025942
    Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 1, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hamid R. Sheikh, Youngjun Yoo, Michael Polley, Chenchi Luo, David Liu
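The multi-stage, early-exit flow described in this abstract can be pictured as a small control loop. The sketch below is a hypothetical illustration, not the patented codec: the stage callables, the metadata check, and the `progressive_decision` helper are invented names, and a real system would operate on actual entropy-coded video data.

```python
# Hypothetical sketch of progressive, multi-stage decode-and-decide.
def progressive_decision(video_stream, stages, can_decide, decide):
    """Partially decode stage by stage; stop as soon as a decision is possible.

    stages     -- ordered callables, each advancing the decode by one stage
    can_decide -- callable(partial) -> bool, True if the computer vision system
                  can already produce a decision from the partially decoded data
    decide     -- callable(partial) -> the decision for the computer vision system
    """
    partial = video_stream
    for stage in stages:
        partial = stage(partial)          # perform one stage of the multi-stage decoding
        if can_decide(partial):           # e.g. embedded metadata already answers the query
            return decide(partial)        # early exit: no need to fully decode
    return decide(partial)                # fall back to deciding on the fully decoded stream


# Toy usage with stand-in stages (headers -> coefficients -> pixels).
if __name__ == "__main__":
    stages = [
        lambda s: dict(s, decoded="headers"),
        lambda s: dict(s, decoded="coefficients"),
        lambda s: dict(s, decoded="pixels"),
    ]
    stream = {"metadata": {"motion_detected": True}}
    result = progressive_decision(
        stream,
        stages,
        can_decide=lambda p: "motion_detected" in p.get("metadata", {}),
        decide=lambda p: p["metadata"].get("motion_detected", False),
    )
    print(result)  # True, decided after the first partial-decode stage
```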
  • Publication number: 20210158142
    Abstract: A method includes identifying, by at least one processor, multiple features of input data using a common feature extractor. The method also includes processing, by the at least one processor, at least some identified features using each of multiple pre-processing branches. Each pre-processing branch includes a first set of neural network layers and generates initial outputs associated with a different one of multiple data processing tasks. The method further includes combining, by the at least one processor, at least two initial outputs from at least two pre-processing branches to produce combined initial outputs. In addition, the method includes processing, by the at least one processor, at least some initial outputs or at least some combined initial outputs using each of multiple post-processing branches. Each post-processing branch includes a second set of neural network layers and generates final outputs associated with a different one of the multiple data processing tasks.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Chenchi Luo, Yingmao Li, Youngjun Yoo
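The topology in this abstract (shared feature extractor, per-task pre-processing branches, a combination step, and per-task post-processing branches) can be sketched in a few lines of PyTorch. The layer sizes, the two example tasks, and the `MultiTaskNet` class below are assumptions made for illustration only.

```python
# Minimal PyTorch sketch of the shared-backbone, two-branch multi-task layout.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_ch=3, feat_ch=16):
        super().__init__()
        # Common feature extractor shared by all tasks
        self.backbone = nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU())
        # Pre-processing branches: first set of layers, one per task, producing initial outputs
        self.pre_a = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)   # hypothetical task A
        self.pre_b = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)   # hypothetical task B
        # Post-processing branches: second set of layers, one per task, producing final outputs
        self.post_a = nn.Conv2d(2 * feat_ch, in_ch, 3, padding=1)
        self.post_b = nn.Conv2d(2 * feat_ch, in_ch, 3, padding=1)

    def forward(self, x):
        feats = self.backbone(x)                       # shared features of the input data
        init_a, init_b = self.pre_a(feats), self.pre_b(feats)
        combined = torch.cat([init_a, init_b], dim=1)  # combine the branches' initial outputs
        return self.post_a(combined), self.post_b(combined)

out_a, out_b = MultiTaskNet()(torch.randn(1, 3, 64, 64))
print(out_a.shape, out_b.shape)  # torch.Size([1, 3, 64, 64]) for each task
```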
  • Patent number: 10936057
    Abstract: A method for eye tracking in a head-mountable device (HMD) includes determining at least one object within a three-dimensional (3D) extended reality (XR) environment as an eye tracking calibration point and determining a 3D location of the eye tracking calibration point within the XR environment. The method also includes detecting a gaze point of a user of the HMD and comparing the detected gaze point to an area of the XR environment that includes the 3D location of the eye tracking calibration point. The method further includes, in response to determining that the user is looking at the eye tracking calibration point based on the detected gaze point being within the area, calibrating, using a processor, the HMD to correct a difference between the eye tracking calibration point and the detected gaze point.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: March 2, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Injoon Hong, Sourabh Ravindran, Youngjun Yoo, Michael O. Polley
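The calibration step in this abstract reduces to comparing a detected gaze point against a region around a known 3D point and, on a match, storing the residual as a correction. The numpy sketch below assumes a spherical acceptance region and a simple additive offset model; the `calibrate` helper and its `radius` parameter are hypothetical.

```python
# Simplified gaze-calibration update, assuming an additive 3D offset model.
import numpy as np

def calibrate(gaze_point, calib_point, radius=0.05, offset=None):
    """Update the gaze-correction offset when the user looks at the calibration point."""
    gaze_point = np.asarray(gaze_point, dtype=float)
    calib_point = np.asarray(calib_point, dtype=float)
    offset = np.zeros(3) if offset is None else np.asarray(offset, dtype=float)
    if np.linalg.norm(gaze_point - calib_point) <= radius:   # detected gaze is within the area
        offset = calib_point - gaze_point                    # correct the measured difference
    return offset

offset = calibrate([0.98, 0.51, 2.0], [1.0, 0.5, 2.0])
print(offset)                                   # ~[ 0.02 -0.01  0.  ]
print(np.array([0.98, 0.51, 2.0]) + offset)     # corrected gaze lands on the calibration point
```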
  • Publication number: 20200394759
    Abstract: A method includes obtaining a first image of a scene using a first image sensor of an electronic device and a second image of the scene using a second image sensor of the electronic device. The method also includes generating a first feature map from the first image and a second feature map from the second image. The method further includes generating a third feature map based on the first feature map, the second feature map, and an asymmetric search window. The method additionally includes generating a depth map by restoring spatial resolution to the third feature map.
    Type: Application
    Filed: December 12, 2019
    Publication date: December 17, 2020
    Inventors: Chenchi Luo, Yingmao Li, Youngjun Yoo, George Q. Chen, Kaimo Lin, David D. Liu, Gyeongmin Choe
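The "asymmetric search window" here is essentially a disparity search that extends farther in one direction than the other when matching the two feature maps. The toy numpy sketch below builds such a cost volume with absolute differences; the `neg_range`/`pos_range` parameters and the 1-D search are simplifications, not the patented network.

```python
# Toy cost volume over an asymmetric horizontal search window.
import numpy as np

def asymmetric_cost_volume(feat1, feat2, neg_range=1, pos_range=4):
    """Stack matching costs for shifts of feat2 in the window [-neg_range, +pos_range]."""
    costs = []
    for d in range(-neg_range, pos_range + 1):
        shifted = np.roll(feat2, d, axis=1)        # candidate horizontal shift (disparity) d
        costs.append(np.abs(feat1 - shifted))      # per-pixel matching cost at this shift
    return np.stack(costs, axis=0)                 # (num_shifts, H, W) "third feature map"

feat1, feat2 = np.random.rand(8, 8), np.random.rand(8, 8)
volume = asymmetric_cost_volume(feat1, feat2)
disparity = volume.argmin(axis=0) - 1              # cheapest shift per pixel, offset by neg_range
print(volume.shape, disparity.shape)               # (6, 8, 8) (8, 8)
```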
  • Publication number: 20200349411
    Abstract: An electronic device, method, and computer readable medium for an invertible wavelet layer for neural networks are provided. The electronic device includes a memory and at least one processor coupled to the memory. The at least one processor is configured to receive an input to a neural network, apply a wavelet transform to the input at a wavelet layer of the neural network, and generate a plurality of subbands of the input as a result of the wavelet transform.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Chenchi Luo, David Liu, Youngjun Yoo
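A wavelet layer of the kind described here can be illustrated with a single-level Haar transform, which splits an input into four subbands and is exactly invertible. The sketch below operates on a plain 2-D array; a real layer inside a neural network would apply the same split per channel on tensors.

```python
# Single-level Haar split into four subbands, with an exact inverse.
import numpy as np

def haar_forward(x):
    """Split an even-sized 2-D array into LL, LH, HL, HH subbands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll, lh = (a + b + c + d) / 2, (a - b + c - d) / 2
    hl, hh = (a + b - c - d) / 2, (a - b - c + d) / 2
    return ll, lh, hl, hh

def haar_inverse(ll, lh, hl, hh):
    """Reassemble the original array from its four subbands."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

img = np.random.rand(8, 8)
subbands = haar_forward(img)
print(np.allclose(haar_inverse(*subbands), img))  # True: the layer is invertible
```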
  • Publication number: 20200326774
    Abstract: A method for eye tracking in a head-mountable device (HMD) includes determining at least one object within a three-dimensional (3D) extended reality (XR) environment as an eye tracking calibration point and determining a 3D location of the eye tracking calibration point within the XR environment. The method also includes detecting a gaze point of a user of the HMD and comparing the detected gaze point to an area of the XR environment that includes the 3D location of the eye tracking calibration point. The method further includes, in response to determining that the user is looking at the eye tracking calibration point based on the detected gaze point being within the area, calibrating, using a processor, the HMD to correct a difference between the eye tracking calibration point and the detected gaze point.
    Type: Application
    Filed: April 9, 2019
    Publication date: October 15, 2020
    Inventors: Injoon Hong, Sourabh Ravindran, Youngjun Yoo, Michael O. Polley
  • Publication number: 20200267324
    Abstract: An electronic device, a method, and computer readable medium for operating an electronic device are disclosed. The method includes receiving data about a state of the electronic device from one or more sensors of the electronic device. The method also includes determining whether to modify a user interface button displayed on a display of the electronic device based on the received state data and parameters of a neural network. The method further includes modifying display of the user interface button on the display of the electronic device based on determining to modify. The method additionally includes providing, to the neural network, feedback data indicating whether the user interface button was triggered within a predetermined time period after modifying the user interface button.
    Type: Application
    Filed: February 19, 2019
    Publication date: August 20, 2020
    Inventors: David Liu, Youngjun Yoo
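The control loop in this abstract (device state in, button modification out, tap-within-a-window back in as feedback) can be sketched as follows. The stand-in model, the `scale` modification, and the feedback encoding are placeholders for the neural network and parameters the abstract refers to.

```python
# Hypothetical adaptive-button loop with feedback logging.
import time

def maybe_modify_button(state, model, button, feedback_log, wait_s=0.1):
    """Modify the button if the model says so, then log whether it was triggered in time."""
    if model(state):                                 # stand-in for the network's modify decision
        button["scale"] = 1.5                        # e.g. enlarge the button for easier tapping
    deadline = time.time() + wait_s
    triggered = False
    while time.time() < deadline:                    # watch for a tap within the time window
        if button.get("pressed"):
            triggered = True
            break
        time.sleep(0.01)
    feedback_log.append((state, triggered))          # feedback later used to update the model
    return triggered

# Toy usage: a one-handed-grip state and a stand-in "network" that always modifies.
log = []
maybe_modify_button({"tilt": 0.4, "grip": "one_handed"}, lambda s: True,
                    {"scale": 1.0, "pressed": False}, log, wait_s=0.05)
print(log)   # [({'tilt': 0.4, 'grip': 'one_handed'}, False)]
```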
  • Patent number: 10698570
    Abstract: A method of implementing a user interface on a mobile device is described. The method comprises learning user preferences and context information; storing the user preferences associated with the user-centric interface on at least one of the device or a secure server; and determining and updating the context information and the user preferences on at least one of the device or the secure server.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: June 30, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sourabh Ravindran, Youngjun Yoo, Michael Oliver Polley
  • Publication number: 20200186710
    Abstract: A method includes, in a first mode, positioning first and second tiltable image sensor modules of an image sensor array of an electronic device so that a first optical axis of the first tiltable image sensor module and a second optical axis of the second tiltable image sensor module are substantially perpendicular to a surface of the electronic device, and the first and second tiltable image sensor modules are within a thickness profile of the electronic device. The method also includes, in a second mode, tilting the first and second tiltable image sensor modules so that the first optical axis of the first tiltable image sensor module and the second optical axis of the second tiltable image sensor module are not perpendicular to the surface of the electronic device, and at least part of the first and second tiltable image sensor modules is no longer within the thickness profile of the electronic device.
    Type: Application
    Filed: December 5, 2019
    Publication date: June 11, 2020
    Inventors: Hamid R. Sheikh, Youngjun Yoo, Seok-Jun Lee, Michael O. Polley
  • Patent number: 10671141
    Abstract: A method of controlling a link state of a communication port of a storage device according to the present inventive concepts includes setting the link state of the communication port to a link active state that can exchange data with a host, determining a holding time of a first standby state among link states of the communication port, changing the link state of the communication port to the first standby state, monitoring whether an exit event occurs during the holding time from the time when a transition to the first standby state occurs, and in response to an exit event not occurring during the holding time, changing the link state of the communication port to a second standby state. A recovery time from the first standby state to the link active state is shorter than a recovery time from the second standby state to the link active state.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: June 2, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ohsung Kwon, Youngjun Yoo, Hojun Shim, Kwanggu Lee
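The link-state policy here is a small state machine: enter a fast-exit standby state, wait up to a holding time for an exit event, and fall back to a deeper standby state if none arrives. The sketch below uses invented state names and polling; it is not the storage-device protocol itself.

```python
# Simplified two-level standby policy with a holding-time window.
import time

ACTIVE, STANDBY_FAST, STANDBY_DEEP = "active", "standby_fast", "standby_deep"

def manage_link(exit_event_pending, holding_time_s=0.05, poll_s=0.01):
    """Return the link state the port ends up in after the holding time."""
    state = STANDBY_FAST                              # transition out of the link-active state
    deadline = time.time() + holding_time_s
    while time.time() < deadline:                     # monitor for an exit event during holding time
        if exit_event_pending():
            return ACTIVE                             # fast recovery back to the link-active state
        time.sleep(poll_s)
    return STANDBY_DEEP                               # no event: deeper, slower-to-recover standby

print(manage_link(lambda: False))   # standby_deep
print(manage_link(lambda: True))    # active
```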
  • Patent number: 10586367
    Abstract: A method, apparatus, and computer readable medium for interactive cinemagrams. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes after the displaying, identifying occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the image as a cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple of the frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to the portion.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: March 10, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sourabh Ravindran, Youngjun Yoo
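The playback half of this abstract is a simple gate: keep showing the still frame until a sensor-driven triggering event occurs, then play the animated portion. The generator below uses a tilt threshold as a stand-in trigger; the sensor stream and frame representation are hypothetical.

```python
# Toy cinemagram playback gated on a sensor-driven trigger.
def play_cinemagram(still_frame, animated_frames, sensor_readings, tilt_threshold=0.5):
    """Yield the still frame until a reading crosses the threshold, then the animation."""
    for reading in sensor_readings:
        if abs(reading) >= tilt_threshold:            # triggering event detected
            yield from animated_frames                # animate the cinemagram's animated portion
            return
        yield still_frame                             # otherwise keep showing the still frame

frames = list(play_cinemagram("still", ["f1", "f2", "f3"], [0.1, 0.2, 0.7]))
print(frames)   # ['still', 'still', 'f1', 'f2', 'f3']
```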
  • Patent number: 10575067
    Abstract: One embodiment provides a method comprising analyzing one or more frames of a piece of content to determine a context of the one or more frames, determining a product to advertise in the piece of content based on the context, and augmenting the piece of content with a product placement for the product. The product placement appears to occur naturally in the piece of content.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: February 25, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sourabh Ravindran, Youngjun Yoo
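At a high level the method is a three-stage pipeline: classify the context of the analyzed frames, choose a product mapped to that context, and composite the placement into the frames. The sketch below wires those stages together with placeholder callables; the context labels, catalog, and compositing step are invented and far simpler than a production system.

```python
# Hypothetical context-driven product-placement pipeline.
def augment_with_placement(frames, classify_context, catalog, composite):
    context = classify_context(frames)                     # e.g. "kitchen", "outdoor", "office"
    product = catalog.get(context)                         # product chosen for that context
    if product is None:
        return frames                                      # nothing suitable to place
    return [composite(f, product) for f in frames]         # placement composited into the scene

frames = ["frame0", "frame1"]
out = augment_with_placement(
    frames,
    classify_context=lambda fs: "kitchen",
    catalog={"kitchen": "coffee_brand"},
    composite=lambda f, p: f"{f}+{p}",
)
print(out)   # ['frame0+coffee_brand', 'frame1+coffee_brand']
```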
  • Publication number: 20190246130
    Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
    Type: Application
    Filed: February 8, 2018
    Publication date: August 8, 2019
    Inventors: Hamid R. Sheikh, Youngjun Yoo, Michael Polley, Chenchi Luo, David Liu
  • Publication number: 20190019320
    Abstract: A method, apparatus, and computer readable medium for interactive cinemagrams. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes after the displaying, identifying occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the image as a cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple of the frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to the portion.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Sourabh Ravindran, Youngjun Yoo
  • Publication number: 20190011978
    Abstract: An embodiment of this disclosure provides a wearable device. The wearable device includes a memory configured to store a plurality of content for display, a transceiver configured to receive the plurality of content from a connected device, a display configured to display the plurality of content, and a processor coupled to the memory, the display, and the transceiver. The processor is configured to control the display to display at least some of the plurality of content in a spatially arranged format. The displayed content is on the display at a display position. The plurality of content, when shown on the connected device, is not in the spatially arranged format. The processor is also configured to receive movement information based on a movement of the wearable device. The processor is also configured to adjust the display position of the displayed content according to the movement information of the wearable device.
    Type: Application
    Filed: July 10, 2017
    Publication date: January 10, 2019
    Inventors: Sourabh Ravindran, Hamid R. Sheikh, Michael Polley, Youngjun Yoo
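One way to read the display-position adjustment is as an angular layout: content items sit at fixed angular offsets around the wearer, and the rotation reported by the motion sensors selects which item is centered on the small display. The 1-D layout and the `spacing_deg` parameter below are assumptions for illustration.

```python
# Toy angular arrangement: device rotation picks the centered content item.
def visible_item(items, spacing_deg, rotation_deg):
    """Return the content item closest to the display center for a given rotation."""
    index = round(rotation_deg / spacing_deg) % len(items)    # movement shifts the arrangement
    return items[index]

content = ["email", "weather", "music", "calendar"]
print(visible_item(content, spacing_deg=30, rotation_deg=0))    # email
print(visible_item(content, spacing_deg=30, rotation_deg=62))   # music
```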
  • Publication number: 20180192160
    Abstract: One embodiment provides a method comprising analyzing one or more frames of a piece of content to determine a context of the one or more frames, determining a product to advertise in the piece of content based on the context, and augmenting the piece of content with a product placement for the product. The product placement appears to occur naturally in the piece of content.
    Type: Application
    Filed: March 29, 2017
    Publication date: July 5, 2018
    Inventors: Sourabh Ravindran, Youngjun Yoo
  • Publication number: 20180136796
    Abstract: A method of implementing a user interface on a mobile device is described. The method comprises learning user preferences and context information; storing the user preferences associated with the user-centric interface on at least one of the device or a secure server; and determining and updating the context information and the user preferences on at least one of the device or the secure server.
    Type: Application
    Filed: November 14, 2016
    Publication date: May 17, 2018
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Sourabh Ravindran, Youngjun Yoo, Michael Oliver Polley
  • Publication number: 20180120918
    Abstract: A method of controlling a link state of a communication port of a storage device according to the present inventive concepts includes setting the link state of the communication port to a link active state that can exchange data with a host, determining a holding time of a first standby state among link states of the communication port, changing the link state of the communication port to the first standby state, monitoring whether an exit event occurs during the holding time from the time when a transition to the first standby state occurs, and in response to an exit event not occurring during the holding time, changing the link state of the communication port to a second standby state. A recovery time from the first standby state to the link active state is shorter than a recovery time from the second standby state to the link active state.
    Type: Application
    Filed: September 21, 2017
    Publication date: May 3, 2018
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Ohsung Kwon, Youngjun Yoo, Hojun Shim, Kwanggu Lee
  • Publication number: 20170303790
    Abstract: A mobile hyperspectral camera system is described. The mobile hyperspectral camera system comprises a mobile host device comprising a processor and a display; a plurality of cameras, coupled to the processor, configured to capture images in distinct spectral bands; and a hyperspectral flash array, coupled to the processor, configured to provide illumination for the distinct spectral bands. A method of implementing a mobile hyperspectral camera system is also described.
    Type: Application
    Filed: January 4, 2017
    Publication date: October 26, 2017
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Raja Bala, Sourabh Ravindran, Hamid Rahim Sheikh, Youngjun Yoo, Michael Oliver Polley
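A plausible capture sequence for such a system is band-by-band: fire the flash element matched to a spectral band, read the camera assigned to that band, and collect the results into a hyperspectral stack. The `StubFlash`/`StubCamera` interfaces below are hypothetical; real hardware drivers would replace them.

```python
# Schematic band-by-band hyperspectral capture with stub hardware interfaces.
class StubFlash:
    def illuminate(self, band): print(f"flash on: {band}")
    def off(self): pass

class StubCamera:
    def __init__(self, band): self.band = band
    def capture(self): return f"image_{self.band}"

def capture_hyperspectral(bands, cameras, flash_array):
    """Return a dict mapping each spectral band to its captured image."""
    cube = {}
    for band in bands:
        flash_array.illuminate(band)            # illumination matched to this spectral band
        cube[band] = cameras[band].capture()    # camera dedicated to this spectral band
        flash_array.off()
    return cube

bands = ["visible", "nir", "uv"]
print(capture_hyperspectral(bands, {b: StubCamera(b) for b in bands}, StubFlash()))
```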
  • Patent number: 7593580
    Abstract: A digital video acquisition system including a plurality of image processors (30A; 30B) is disclosed. A CCD imager (22) presents video image data on a bus (video_in) in the form of digital video data, arranged in a sequence of frames. A master image processor (30A) captures and encodes a first group of frames, and instructs a slave image processor (30B) to capture and encode a second group of frames presented by the CCD imager (22) before the encoding of the first group of frames is completed by the master image processor. The master image processor (30A) completes its encoding, and is then available to capture and encode another group of frames in the sequence. Video frames that are encoded by the slave image processor (30B) are transferred to the master image processor (30A), which sequences and stores the transferred encoded frames and also those frames that it encodes in a memory (36A; 38).
    Type: Grant
    Filed: July 13, 2004
    Date of Patent: September 22, 2009
    Assignee: Texas Instruments Incorporated
    Inventors: Damon Domke, Youngjun Yoo, Deependra Talla, Ching-Yu Hung
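The master/slave pipelining in this abstract alternates groups of frames between the two processors so the master can start a new group before the previous one finishes, then re-sequences everything. The sketch below stands in a worker thread for the slave processor and a sorted merge for the shared-memory transfer; the group size and encoder tags are illustrative only.

```python
# Toy master/slave frame-group pipelining with re-sequencing by the master.
from concurrent.futures import ThreadPoolExecutor

def encode_group(name, group):
    """Stand-in encoder: tag each (index, frame) pair with the processor that encoded it."""
    return [(idx, f"{name}:{frame}") for idx, frame in group]

def encode_sequence(frames, group_size=2):
    indexed = list(enumerate(frames))
    groups = [indexed[i:i + group_size] for i in range(0, len(indexed), group_size)]
    results, futures = [], []
    with ThreadPoolExecutor(max_workers=1) as slave:           # the slave image processor
        for i, group in enumerate(groups):
            if i % 2 == 0:
                results += encode_group("master", group)       # master encodes even-numbered groups
            else:
                futures.append(slave.submit(encode_group, "slave", group))  # slave takes the rest
        for fut in futures:
            results += fut.result()                            # transfer slave output to the master
    return [frame for _, frame in sorted(results)]             # master re-sequences and stores

print(encode_sequence([f"f{i}" for i in range(6)]))
# ['master:f0', 'master:f1', 'slave:f2', 'slave:f3', 'master:f4', 'master:f5']
```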