Patents by Inventor Youngjun Yoo
Youngjun Yoo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11025942
Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
Type: Grant
Filed: February 8, 2018
Date of Patent: June 1, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hamid R. Sheikh, Youngjun Yoo, Michael Polley, Chenchi Luo, David Liu
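The staged decode-and-decide flow in this abstract can be sketched as follows. The stage names, the `decide` callback, and the motion threshold are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of compressed-domain progressive decoding:
# each stage exposes more of the stream, and the computer vision
# decision is attempted after every stage so decoding can stop early.

def progressive_decode(stages, decide):
    """Run decode stages in order; return a CV decision as soon as
    one can be made from the partially decoded stream."""
    partial = {}
    for name, stage_fn in stages:
        partial[name] = stage_fn()          # perform one decode stage
        decision = decide(partial)          # try to decide from partial data
        if decision is not None:
            return decision, name           # early exit: skip later stages
    return decide(partial), "full"          # fall back to full decode

# Toy stages: motion vectors alone may already answer "is anything moving?"
stages = [
    ("headers", lambda: {"resolution": (1920, 1080)}),
    ("motion_vectors", lambda: {"mean_magnitude": 7.2}),
    ("pixels", lambda: {"frame": "..."}),
]

def motion_detector(partial):
    mv = partial.get("motion_vectors")
    if mv is not None:
        return mv["mean_magnitude"] > 1.0   # decide without pixel decode
    return None                             # cannot decide yet

decision, stopped_at = progressive_decode(stages, motion_detector)
```

Here the decision is reached after the motion-vector stage, so the costly pixel-decode stage is skipped entirely, which is the point of the progressive scheme.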
-
Publication number: 20210158142
Abstract: A method includes identifying, by at least one processor, multiple features of input data using a common feature extractor. The method also includes processing, by the at least one processor, at least some identified features using each of multiple pre-processing branches. Each pre-processing branch includes a first set of neural network layers and generates initial outputs associated with a different one of multiple data processing tasks. The method further includes combining, by the at least one processor, at least two initial outputs from at least two pre-processing branches to produce combined initial outputs. In addition, the method includes processing, by the at least one processor, at least some initial outputs or at least some combined initial outputs using each of multiple post-processing branches. Each post-processing branch includes a second set of neural network layers and generates final outputs associated with a different one of the multiple data processing tasks.
Type: Application
Filed: November 22, 2019
Publication date: May 27, 2021
Inventors: Chenchi Luo, Yingmao Li, Youngjun Yoo
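The data flow this abstract describes can be wired up as a small sketch. Real branches would be neural network layers; here each branch is a plain function, and the task names and combine rule are placeholder assumptions, so only the shared-extractor / pre-branch / combine / post-branch topology is meant to match the text.

```python
# Illustrative wiring of the multi-task pipeline: a shared feature
# extractor feeds per-task pre-processing branches, whose initial
# outputs are fused and then refined by per-task post-processing
# branches.

def common_feature_extractor(x):
    return [v * 2.0 for v in x]            # shared features for all tasks

def multi_task_forward(x, pre_branches, combine, post_branches):
    features = common_feature_extractor(x)
    initial = {task: fn(features) for task, fn in pre_branches.items()}
    combined = combine(initial)            # fuse initial outputs across tasks
    return {task: fn(initial[task], combined)
            for task, fn in post_branches.items()}

pre = {
    "depth":   lambda f: sum(f),           # stand-in for NN layers
    "segment": lambda f: max(f),
}
post = {
    "depth":   lambda init, comb: init + comb,
    "segment": lambda init, comb: init - comb,
}
combine = lambda initial: sum(initial.values()) / len(initial)

out = multi_task_forward([1.0, 2.0, 3.0], pre, combine, post)
```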
-
Patent number: 10936057
Abstract: A method for eye tracking in a head-mountable device (HMD) includes determining at least one object within a three-dimensional (3D) extended reality (XR) environment as an eye tracking calibration point and determining a 3D location of the eye tracking calibration point within the XR environment. The method also includes detecting a gaze point of a user of the HMD and comparing the detected gaze point to an area of the XR environment that includes the 3D location of the eye tracking calibration point. The method further includes, in response to determining that the user is looking at the eye tracking calibration point based on the detected gaze point being within the area, calibrating, using a processor, the HMD to correct a difference between the eye tracking calibration point and the detected gaze point.
Type: Grant
Filed: April 9, 2019
Date of Patent: March 2, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Injoon Hong, Sourabh Ravindran, Youngjun Yoo, Michael O. Polley
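A minimal sketch of the calibration step described above: if the detected gaze point falls within an acceptance area around a known 3D calibration point, the offset between them corrects future gaze estimates. The acceptance radius and the simple additive correction model are assumptions for illustration.

```python
# Hypothetical gaze calibration: accept the sample only when the gaze
# lands inside a sphere around the calibration point, then use the
# residual offset as a per-axis correction.

def calibrate(calib_point, gaze_point, radius=0.1):
    """Return a per-axis correction if the user is looking at the
    calibration point, else None."""
    dist = sum((c - g) ** 2 for c, g in zip(calib_point, gaze_point)) ** 0.5
    if dist > radius:
        return None                        # user is not looking at the point
    return tuple(c - g for c, g in zip(calib_point, gaze_point))

def apply_correction(gaze_point, correction):
    """Apply the stored correction to a later gaze estimate."""
    return tuple(g + d for g, d in zip(gaze_point, correction))

corr = calibrate((0.0, 0.0, 1.0), (0.02, -0.01, 1.0))
fixed = apply_correction((0.02, -0.01, 1.0), corr)
```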
-
Publication number: 20200394759
Abstract: A method includes obtaining a first image of a scene using a first image sensor of an electronic device and a second image of the scene using a second image sensor of the electronic device. The method also includes generating a first feature map from the first image and a second feature map from the second image. The method further includes generating a third feature map based on the first feature map, the second feature map, and an asymmetric search window. The method additionally includes generating a depth map by restoring spatial resolution to the third feature map.
Type: Application
Filed: December 12, 2019
Publication date: December 17, 2020
Inventors: Chenchi Luo, Yingmao Li, Youngjun Yoo, George Q. Chen, Kaimo Lin, David D. Liu, Gyeongmin Choe
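The asymmetric-search-window idea can be illustrated with a toy disparity search: for rectified stereo pairs, matching features shift in only one direction, so the search window need not be symmetric around zero. The 1-D feature maps, absolute-difference cost, and window bounds below are assumptions standing in for the learned feature maps in the abstract.

```python
# Toy stereo matching with an asymmetric search window: compare each
# feature in the first map against a one-sided range of offsets in
# the second map and keep the best-matching disparity.

def best_disparity(f1, f2, pos, window=(0, 3)):
    """Match f1[pos] against f2[pos + d] for d in [window[0], window[1]]
    (asymmetric: no negative offsets); return the best disparity d."""
    lo, hi = window
    best_d, best_cost = None, float("inf")
    for d in range(lo, hi + 1):
        j = pos + d
        if 0 <= j < len(f2):
            cost = abs(f1[pos] - f2[j])    # matching cost between features
            if cost < best_cost:
                best_d, best_cost = d, cost
    return best_d

f1 = [5.0, 1.0, 8.0, 2.0]
f2 = [9.0, 5.0, 1.0, 8.0]                  # f1 shifted right by one sample
depth_proxy = [best_disparity(f1, f2, i) for i in range(len(f1))]
```

In a real pipeline this cost volume would be built over 2-D feature maps and the result upsampled ("restoring spatial resolution") to produce the final depth map.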
-
Publication number: 20200349411
Abstract: An electronic device, method, and computer readable medium for an invertible wavelet layer for neural networks are provided. The electronic device includes a memory and at least one processor coupled to the memory. The at least one processor is configured to receive an input to a neural network, apply a wavelet transform to the input at a wavelet layer of the neural network, and generate a plurality of subbands of the input as a result of the wavelet transform.
Type: Application
Filed: April 30, 2019
Publication date: November 5, 2020
Inventors: Chenchi Luo, David Liu, Youngjun Yoo
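A one-level Haar transform is a simple concrete instance of an invertible wavelet layer: the input splits into a low-pass and a high-pass subband, and the original signal is exactly recoverable. The patent describes a general wavelet layer inside a neural network; the 1-D Haar case here is only an illustration of the invertibility property.

```python
# One-level Haar wavelet transform: forward produces two subbands,
# inverse reconstructs the signal exactly (the layer loses nothing).

def haar_forward(x):
    """Split an even-length signal into (approximation, detail) subbands."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the original signal from its two subbands."""
    x = []
    for a, d in zip(approx, detail):
        x.extend([a + d, a - d])
    return x

signal = [4.0, 2.0, 5.0, 7.0]
subbands = haar_forward(signal)
restored = haar_inverse(*subbands)         # invertibility: equals signal
```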
-
Publication number: 20200326774
Abstract: A method for eye tracking in a head-mountable device (HMD) includes determining at least one object within a three-dimensional (3D) extended reality (XR) environment as an eye tracking calibration point and determining a 3D location of the eye tracking calibration point within the XR environment. The method also includes detecting a gaze point of a user of the HMD and comparing the detected gaze point to an area of the XR environment that includes the 3D location of the eye tracking calibration point. The method further includes, in response to determining that the user is looking at the eye tracking calibration point based on the detected gaze point being within the area, calibrating, using a processor, the HMD to correct a difference between the eye tracking calibration point and the detected gaze point.
Type: Application
Filed: April 9, 2019
Publication date: October 15, 2020
Inventors: Injoon Hong, Sourabh Ravindran, Youngjun Yoo, Michael O. Polley
-
Publication number: 20200267324
Abstract: An electronic device, a method, and computer readable medium for operating an electronic device are disclosed. The method includes receiving data about a state of the electronic device from one or more sensors of the electronic device. The method also includes determining whether to modify a user interface button displayed on a display of the electronic device based on the received state data and parameters of a neural network. The method further includes modifying display of the user interface button on the display of the electronic device based on determining to modify. The method additionally includes providing, to the neural network, feedback data indicating whether the user interface button was triggered within a predetermined time period after modifying the user interface button.
Type: Application
Filed: February 19, 2019
Publication date: August 20, 2020
Inventors: David Liu, Youngjun Yoo
-
Patent number: 10698570
Abstract: A method of implementing a user interface on a mobile device is described. The method comprises learning user preference and context information; storing user preferences associated with the user-centric interface on at least one of the device or a secure server; determining and updating the context information and the user preferences on at least one of the device or a secure server.
Type: Grant
Filed: November 14, 2016
Date of Patent: June 30, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sourabh Ravindran, Youngjun Yoo, Michael Oliver Polley
-
Publication number: 20200186710
Abstract: A method includes, in a first mode, positioning first and second tiltable image sensor modules of an image sensor array of an electronic device so that a first optical axis of the first tiltable image sensor module and a second optical axis of the second tiltable image sensor module are substantially perpendicular to a surface of the electronic device, and the first and second tiltable image sensor modules are within a thickness profile of the electronic device. The method also includes, in a second mode, tilting the first and second tiltable image sensor modules so that the first optical axis of the first tiltable image sensor module and the second optical axis of the second tiltable image sensor module are not perpendicular to the surface of the electronic device, and at least part of the first and second tiltable image sensor modules are no longer within the thickness profile of the electronic device.
Type: Application
Filed: December 5, 2019
Publication date: June 11, 2020
Inventors: Hamid R. Sheikh, Youngjun Yoo, Seok-Jun Lee, Michael O. Polley
-
Patent number: 10671141
Abstract: A method of controlling a link state of a communication port of a storage device according to the present inventive concepts includes setting the link state of the communication port to a link active state that can exchange data with a host, determining a holding time of a first standby state among link states of the communication port, changing the link state of the communication port to the first standby state, monitoring whether an exit event occurs during the holding time from the time when a transition to the first standby state occurs, and in response to an exit event not occurring during the holding time, changing the link state of the communication port to a second standby state. A recovery time from the first standby state to the link active state is shorter than a recovery time from the second standby state to the link active state.
Type: Grant
Filed: September 21, 2017
Date of Patent: June 2, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ohsung Kwon, Youngjun Yoo, Hojun Shim, Kwanggu Lee
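The two-level standby scheme above reduces to a small state machine: the port enters a fast-recovery standby first and drops to a deeper, slower-recovery standby only if no exit event arrives within the holding time. The state names and the millisecond figures below are illustrative assumptions, not values from the patent.

```python
# State-machine sketch of the two-level link standby scheme.

ACTIVE, STANDBY1, STANDBY2 = "active", "standby1", "standby2"

def next_state(holding_time_ms, exit_event_at_ms):
    """From STANDBY1: return the state after the holding window,
    given when (if ever) an exit event occurred."""
    if exit_event_at_ms is not None and exit_event_at_ms <= holding_time_ms:
        return ACTIVE                      # exit event during holding time
    return STANDBY2                        # no event: drop to deeper standby

# The scheme requires the shallow standby to recover faster than the
# deep one (illustrative latencies):
RECOVERY_MS = {STANDBY1: 1, STANDBY2: 10}
```

The trade-off is the usual one for power states: STANDBY1 buys quick wake-up for bursty traffic, while STANDBY2 saves more power once the link has been quiet for the whole holding window.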
-
Patent number: 10586367
Abstract: A method, apparatus, and computer readable medium for interactive cinemagrams. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes after the displaying, identifying occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the image as a cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple of the frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to the portion.
Type: Grant
Filed: July 14, 2017
Date of Patent: March 10, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sourabh Ravindran, Youngjun Yoo
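The trigger logic in this abstract can be sketched in a few lines: the cinemagram stays on its still frame until a sensor reading crosses a threshold, then the animated portion plays. Using a shake-like magnitude threshold as the "triggering event" is an assumption for illustration.

```python
# Hypothetical sensor-triggered cinemagram: still until a sensor
# reading exceeds a threshold, then the animated portion starts.

class Cinemagram:
    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.animating = False             # showing the still frame

    def on_sensor(self, reading):
        """Feed one sensor sample; return whether animation is playing."""
        if not self.animating and abs(reading) > self.threshold:
            self.animating = True          # triggering event detected
        return self.animating

cg = Cinemagram()
states = [cg.on_sensor(r) for r in (0.5, 1.0, 3.1, 0.2)]
```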
-
Patent number: 10575067
Abstract: One embodiment provides a method comprising analyzing one or more frames of a piece of content to determine a context of the one or more frames, determining a product to advertise in the piece of content based on the context, and augmenting the piece of content with a product placement for the product. The product placement appears to occur naturally in the piece of content.
Type: Grant
Filed: March 29, 2017
Date of Patent: February 25, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sourabh Ravindran, Youngjun Yoo
-
Publication number: 20190246130
Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
Type: Application
Filed: February 8, 2018
Publication date: August 8, 2019
Inventors: Hamid R. Sheikh, Youngjun Yoo, Michael Polley, Chenchi Luo, David Liu
-
Publication number: 20190019320
Abstract: A method, apparatus, and computer readable medium for interactive cinemagrams. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes after the displaying, identifying occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the image as a cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple of the frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to the portion.
Type: Application
Filed: July 14, 2017
Publication date: January 17, 2019
Inventors: Sourabh Ravindran, Youngjun Yoo
-
Publication number: 20190011978
Abstract: An embodiment of this disclosure provides a wearable device. The wearable device includes a memory configured to store a plurality of content for display, a transceiver configured to receive the plurality of content from a connected device, a display configured to display the plurality of content, and a processor coupled to the memory, the display, and the transceiver. The processor is configured to control the display to display at least some of the plurality of content in a spatially arranged format. The displayed content is on the display at a display position. The plurality of content, when shown on the connected device, is not in the spatially arranged format. The processor is also configured to receive movement information based on a movement of the wearable device. The processor is also configured to adjust the display position of the displayed content according to the movement information of the wearable device.
Type: Application
Filed: July 10, 2017
Publication date: January 10, 2019
Inventors: Sourabh Ravindran, Hamid R. Sheikh, Michael Polley, Youngjun Yoo
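The spatial arrangement described here can be sketched as a simple windowing problem: content items occupy fixed positions in a virtual space, and device movement shifts which portion of that space is shown. The 1-D angles, field of view, and item names below are simplifying assumptions for illustration.

```python
# Hypothetical head/wrist-tracked spatial layout: each item has a
# fixed virtual angle, and movement re-centers the visible window.

def visible_items(items, device_angle, fov=60.0):
    """Return items whose virtual angle falls inside the display's
    field of view, centered on the device's current angle."""
    half = fov / 2
    return [name for name, angle in items
            if device_angle - half <= angle <= device_angle + half]

content = [("mail", 0.0), ("music", 45.0), ("maps", 90.0)]
front = visible_items(content, device_angle=0.0)     # facing forward
turned = visible_items(content, device_angle=90.0)   # after turning right
```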
-
Publication number: 20180192160
Abstract: One embodiment provides a method comprising analyzing one or more frames of a piece of content to determine a context of the one or more frames, determining a product to advertise in the piece of content based on the context, and augmenting the piece of content with a product placement for the product. The product placement appears to occur naturally in the piece of content.
Type: Application
Filed: March 29, 2017
Publication date: July 5, 2018
Inventors: Sourabh Ravindran, Youngjun Yoo
-
Publication number: 20180136796
Abstract: A method of implementing a user interface on a mobile device is described. The method comprises learning user preference and context information; storing user preferences associated with the user-centric interface on at least one of the device or a secure server; determining and updating the context information and the user preferences on at least one of the device or a secure server.
Type: Application
Filed: November 14, 2016
Publication date: May 17, 2018
Applicant: Samsung Electronics Co., Ltd.
Inventors: Sourabh Ravindran, Youngjun Yoo, Michael Oliver Polley
-
Publication number: 20180120918
Abstract: A method of controlling a link state of a communication port of a storage device according to the present inventive concepts includes setting the link state of the communication port to a link active state that can exchange data with a host, determining a holding time of a first standby state among link states of the communication port, changing the link state of the communication port to the first standby state, monitoring whether an exit event occurs during the holding time from the time when a transition to the first standby state occurs, and in response to an exit event not occurring during the holding time, changing the link state of the communication port to a second standby state. A recovery time from the first standby state to the link active state is shorter than a recovery time from the second standby state to the link active state.
Type: Application
Filed: September 21, 2017
Publication date: May 3, 2018
Applicant: Samsung Electronics Co., Ltd.
Inventors: Ohsung Kwon, Youngjun Yoo, Hojun Shim, Kwanggu Lee
-
Publication number: 20170303790
Abstract: A mobile hyperspectral camera system is described. The mobile hyperspectral camera system comprises a mobile host device comprising a processor and a display; a plurality of cameras, coupled to the processor, configured to capture images in distinct spectral bands; and a hyperspectral flash array, coupled to the processor, configured to provide illumination in the distinct spectral bands. A method of implementing a mobile hyperspectral camera system is also described.
Type: Application
Filed: January 4, 2017
Publication date: October 26, 2017
Applicant: Samsung Electronics Co., Ltd.
Inventors: Raja Bala, Sourabh Ravindran, Hamid Rahim Sheikh, Youngjun Yoo, Michael Oliver Polley
-
Patent number: 7593580
Abstract: A digital video acquisition system including a plurality of image processors (30A; 30B) is disclosed. A CCD imager (22) presents video image data on a bus (video_in) in the form of digital video data, arranged in a sequence of frames. A master image processor (30A) captures and encodes a first group of frames, and instructs a slave image processor (30B) to capture and encode a second group of frames presented by the CCD imager (22) before the encoding of the first group of frames is completed by the master image processor. The master image processor (30A) completes its encoding, and is then available to capture and encode another group of frames in the sequence. Video frames that are encoded by the slave image processor (30B) are transferred to the master image processor (30A), which sequences and stores the transferred encoded frames and also those frames that it encodes in a memory (36A; 38).
Type: Grant
Filed: July 13, 2004
Date of Patent: September 22, 2009
Assignee: Texas Instruments Incorporated
Inventors: Damon Domke, Youngjun Yoo, Deependra Talla, Ching-Yu Hung