Patents by Inventor Ji Hun Cha

Ji Hun Cha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140047309
    Abstract: Provided are an apparatus and method for synchronizing content with data. The apparatus may extract feature information from the content and control synchronization by comparing that content feature information against the data feature information described in the data.
    Type: Application
    Filed: August 1, 2013
    Publication date: February 13, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyun Cheol KIM, Ji Hoon CHOI, Ji Hun CHA, Jin Woong KIM
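The feature-comparison step in the abstract above can be sketched in miniature: summarize the content as a sequence of features, then slide the data-described feature sequence over it to find the best-matching offset. This is only an illustrative toy (the window summary, function names, and matching cost are all assumptions, not the patented implementation):

```python
# Illustrative sketch, not the patented method: align content with
# metadata by comparing extracted features against described features.

def extract_features(samples, window=4):
    """Summarize content as per-window average values (a toy feature)."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

def find_sync_offset(content_features, data_features):
    """Return the feature-window offset where the described features
    best match the extracted ones (smallest total difference)."""
    best_offset, best_cost = 0, float("inf")
    span = len(data_features)
    for offset in range(len(content_features) - span + 1):
        cost = sum(abs(c - d) for c, d in
                   zip(content_features[offset:offset + span], data_features))
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset

# Example: the metadata describes a clip that starts one feature window
# into the content, so the best alignment is offset 1.
content = [0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 0]
feats = extract_features(content)                    # [0, 2.5, 6.5, 0]
described = extract_features([1, 2, 3, 4, 5, 6, 7, 8])   # [2.5, 6.5]
offset = find_sync_offset(feats, described)          # 1
```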
  • Publication number: 20140002353
    Abstract: The invention relates to an advanced user interaction (AUI) interface method comprising a step of determining, from between a basic pattern type and a synthetic pattern type, the pattern type corresponding to physical information input from an object. The synthetic pattern type is a combination of at least two basic pattern types. The basic pattern type includes a geometric pattern type, a symbolic pattern type, a touch pattern type, a hand posture pattern type, and/or a hand gesture pattern type. The synthetic pattern type may include attribute information indicating whether it was created by the same object. Thus, an advanced user interaction interface may be provided for advanced user interaction devices such as multi-touch devices and motion-sensing remote controllers.
    Type: Application
    Filed: March 15, 2012
    Publication date: January 2, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seong Yong Lim, Ji Hun Cha, In Jae Lee, Sang Hyun Park, Young Kwon Lim
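The pattern-type model in the abstract above (basic types, synthetic combinations, and a same-object attribute) can be sketched as a small data model. All class and attribute names here are illustrative assumptions, not identifiers from the MPEG-U/AUI specification:

```python
# Hedged sketch of the basic/synthetic pattern-type model described in
# the abstract. Names are illustrative, not from the standard.
from dataclasses import dataclass
from enum import Enum, auto

class BasicPatternType(Enum):
    GEOMETRIC = auto()
    SYMBOLIC = auto()
    TOUCH = auto()
    HAND_POSTURE = auto()
    HAND_GESTURE = auto()

@dataclass(frozen=True)
class SyntheticPattern:
    """A combination of at least two basic pattern types, plus an
    attribute recording whether all components came from the same object."""
    components: tuple
    same_object: bool

def classify(patterns, same_object=True):
    """Return the basic type for a single input, or a synthetic pattern
    when several basic patterns are combined."""
    if len(patterns) == 1:
        return patterns[0]
    return SyntheticPattern(tuple(patterns), same_object)

# A touch combined with a hand gesture from the same hand becomes a
# synthetic pattern with same_object=True.
pinch = classify([BasicPatternType.TOUCH, BasicPatternType.HAND_GESTURE],
                 same_object=True)
```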
  • Publication number: 20140007175
    Abstract: The invention relates to a method for estimating wireless channel status in a wireless network, performed by a client device connected to a server that transmits a video packet stream through a wired/wireless network. The method comprises: a step of estimating a bit error rate using additional information on a received video packet; and a step of estimating the channel capacity of the wireless network using the estimated bit error rate. The server receives, from the client device, feedback on the estimated channel capacity or channel condition information of the wireless network, and adjusts the optimal video coding rate or the optimal source coding rate in the wireless network. Accordingly, real-time deterioration in the quality of the video stream received by the client device may be prevented, improving the quality of service (QoS) of video received over a wireless network.
    Type: Application
    Filed: March 16, 2012
    Publication date: January 2, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Yong Ju Cho, Ji Hun Cha
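The BER-to-capacity chain in the abstract above can be illustrated with textbook formulas: estimate a bit error rate, invert a modulation model to get an SNR, and plug that into the Shannon capacity. The BPSK relation BER = Q(sqrt(2·SNR)) is assumed here purely for the inversion step; the patent does not state which model it uses:

```python
# Illustrative sketch, not the patented method: estimate a BER from
# packet side information, then a Shannon channel capacity from it.
# A BPSK channel model is assumed for the BER-to-SNR inversion.
import math
from statistics import NormalDist

def estimate_ber(error_bits, total_bits):
    """BER estimated from additional information on received packets."""
    return error_bits / total_bits

def ber_to_snr(ber):
    """Invert the BPSK relation BER = Q(sqrt(2*SNR)) for the linear SNR,
    using Q^-1(p) = Phi^-1(1 - p) from the standard normal CDF."""
    q_arg = NormalDist().inv_cdf(1.0 - ber)
    return q_arg ** 2 / 2.0

def channel_capacity(bandwidth_hz, ber):
    """Shannon capacity C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + ber_to_snr(ber))

cap = channel_capacity(bandwidth_hz=20e6, ber=1e-3)  # 20 MHz channel
```

A server receiving this estimate as feedback could then lower its source coding rate whenever the estimated capacity drops below the current stream bitrate.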
  • Publication number: 20130242043
    Abstract: An image synthesizing method according to the present invention includes generating a depth image in a current picture by searching depth information using a plurality of color images obtained at different view points, performing filtering on the depth image using a 3-dimensional (3D) joint bilateral filter, and generating a synthesized image using the plurality of color images and the filtered depth image, wherein the 3D joint bilateral filter performs filtering on the generated depth image using color image information for at least one of previous pictures, the current picture, and subsequent pictures, and the color image information includes information on a boundary of an object in the color images and color information of the color images. According to the present invention, image processing performance may be enhanced.
    Type: Application
    Filed: October 25, 2012
    Publication date: September 19, 2013
    Applicants: Gwangju Institute of Science and Technology, Electronics and Telecommunications Research Institute
    Inventors: Seung Jun YANG, Ji Hun CHA, Jin Woong KIM, Sang Beom LEE, Yo Sung HO
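The core of the joint bilateral filter described above is that depth samples are averaged with weights drawn from both spatial distance and similarity in the guidance (color) signal, so depth edges stay aligned with color edges. The sketch below covers a single 1-D scanline; the patent's 3-D variant additionally draws guidance from previous and subsequent pictures, which is omitted here:

```python
# Minimal 1-D joint bilateral filter sketch: depth smoothed with weights
# from spatial distance AND color similarity, preserving the depth edge
# where the guidance color signal has an edge.
import math

def joint_bilateral_1d(depth, color, radius=2, sigma_s=1.0, sigma_c=10.0):
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w_space = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
            w_color = math.exp(-((color[i] - color[j]) ** 2) / (2 * sigma_c ** 2))
            w = w_space * w_color
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out

# Noisy depth with an edge; the color edge at the same position keeps
# the two sides from bleeding into each other during smoothing.
depth = [10, 11, 9, 10, 50, 51, 49, 50]
color = [0, 0, 0, 0, 255, 255, 255, 255]
smoothed = joint_bilateral_1d(depth, color)
```

Because the color weight for samples across the edge is essentially zero, each side is denoised only from its own neighbors, which is exactly the property that makes the filtered depth map suitable for view synthesis.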
  • Publication number: 20120319813
    Abstract: Provided are an apparatus and method for processing a scene that may prevent overload of the scene caused by transmission of excessive information: instead of transmitting to the scene all of the information sensed from the real world, the apparatus generates geometric information corresponding to the meaningful information into which the sensed information may be construed semantically, and transmits only that geometric information to the scene.
    Type: Application
    Filed: January 14, 2011
    Publication date: December 20, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seong Yong Lim, In Jae Lee, Ji Hun Cha, Hee Kyung Lee
  • Publication number: 20120206578
    Abstract: Provided are an apparatus and method for eye contact using composition of a front view image, the apparatus including: an image acquiring unit to acquire a multi-camera image; a preprocessing unit to preprocess the acquired multi-camera image; a depth information search unit to search for depth information of the preprocessed multi-camera image; and an image composition unit to compose the front view image using the found depth information.
    Type: Application
    Filed: February 15, 2012
    Publication date: August 16, 2012
    Inventors: Seung Jun YANG, Han Kyu LEE, Ji Hun CHA, Jin Woong KIM, Sang Beom LEE, In Yong SHIN, Yo Sung HO
  • Publication number: 20120133754
    Abstract: A remote gaze tracking apparatus and method for controlling an Internet Protocol Television (IPTV) are provided. An entire image including a facial region of a user may be acquired using visible light, the facial region may be detected from the acquired image, and a face width, a distance between the eyes, and a distance between an eye and the screen may be derived from the detected facial region. Additionally, an enlarged eye image corresponding to the user's face may be acquired using at least one of the face width, the distance between the eyes, and the distance between the eye and the screen, and the user's eye gaze may be tracked using the acquired eye image.
    Type: Application
    Filed: June 16, 2011
    Publication date: May 31, 2012
    Applicants: Dongguk University Industry-Academic Cooperation Foundation, Electronics and Telecommunications Research Institute
    Inventors: Hee Kyung LEE, Han Kyu LEE, Ji Hun CHA, Jin Woong KIM, Kang Ryoung PARK, Hyeon Chang LEE, Won Oh LEE, Chul Woo CHO, Su Yeong GWON, Duc Thien LUONG
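One geometric step in the abstract above, recovering the eye-to-screen distance from the measured face width, can be illustrated with a simple pinhole-camera relation. The focal length and the assumed real-world face width below are arbitrary example constants, not values from the patent:

```python
# Hedged sketch: pinhole-camera estimate of eye-to-screen distance from
# the face width measured in the image. The constants are assumptions.
AVG_FACE_WIDTH_CM = 15.0   # assumed average real-world face width

def eye_to_screen_distance(face_width_px, focal_length_px):
    """Pinhole model: distance = focal_length * real_width / image_width."""
    return focal_length_px * AVG_FACE_WIDTH_CM / face_width_px

# A 300 px face under a 1200 px focal length sits about 60 cm away.
d = eye_to_screen_distance(face_width_px=300, focal_length_px=1200)
```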
  • Publication number: 20120092440
    Abstract: A video communication apparatus in which eye contact can be made with an opposite party of a conversation is provided. The video communication apparatus includes: a monitor unit for video communication; and a camera unit for capturing an image of a user, wherein the monitor unit repeats mode switching between a video mode and a transparent mode according to a specific period, and wherein the camera unit is located behind the monitor unit and captures the image of the user by using a screen of the monitor unit. Accordingly, eye contact can be made with the opposite party when users who participate in video communication talk to each other while seeing communication images.
    Type: Application
    Filed: October 18, 2011
    Publication date: April 19, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Yong Ju CHO, Ji Hun CHA, Jin Woong KIM
  • Publication number: 20120023161
    Abstract: Disclosed herein are a system and a method for providing multimedia services that can deliver various types of multimedia content, together with information sensed at multiple points, to users at a high rate and in real time. According to the service requests of the multimedia services that users want to receive, the system senses scene representation and sensory effects for the corresponding multimedia content through the multi-points, encodes and transmits the sensed scene representation and sensory effect information in binary representation, transmits device command data for the sensed scene representation and sensory effects, and drives and controls user devices according to the device command data, thereby providing the scene representation and sensory effects for the multimedia content to the users.
    Type: Application
    Filed: July 21, 2011
    Publication date: January 26, 2012
    Applicants: SK Telecom Co., Ltd., Electronics and Telecommunications Research Institute
    Inventors: Seong-Yong LIM, In-Jae LEE, Ji-Hun CHA, Young-Kwon LIM, Min-Sik PARK, Han-Kyu LEE, Jin-Woong KIM, Joong-Yun LEE
  • Publication number: 20110145697
    Abstract: A method is defined for performing presentation with reference to structured information existing in an internal or external source. A scene presentation device may present a scene with reference to the structured information based on the defined method.
    Type: Application
    Filed: July 15, 2009
    Publication date: June 16, 2011
    Applicants: NET & TV INC., Electronics and Telecommunications Research Institute
    Inventors: Ji Hun Cha, In Jae Lee, Young Kwon Lim, Han Kyu Lee, Jin Woo Hong
  • Patent number: 7782233
    Abstract: Provided are a method and an apparatus for selectively encoding/decoding point sequences to maximize bit efficiency of a lightweight application scene representation (LASeR) binary stream. The point sequence encoding method includes the steps of: for each point sequence, (a) selecting one of the exponential-Golomb (EG) and fixed length (FL) encoding schemes; (b) when the FL encoding scheme is selected, encoding the point sequence using the FL encoding scheme to generate a binary stream; and (c) when the EG encoding scheme is selected, encoding the point sequence using the EG encoding scheme to generate a binary stream. The binary stream includes a flag indicating which encoding scheme was selected and, when the EG encoding scheme is selected, the parameter k with which the EG encoding can be performed most effectively. According to the encoding method, LASeR point sequences can be encoded efficiently, and decoding imposes no large overhead on the decoder (terminal).
    Type: Grant
    Filed: October 13, 2005
    Date of Patent: August 24, 2010
    Assignees: Electronics and Telecommunications Research Institute, Net & TV, Inc.
    Inventors: Ye Sun Joung, Ji Hun Cha, Won Sik Cheong, Kyu Heon Kim, Young Kwon Lim
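The choice the abstract above describes, fixed-length versus order-k exponential-Golomb coding with a flag recording the selection, can be sketched as follows. The flag convention and bit layout here are illustrative assumptions; the exact LASeR bitstream syntax is defined by the standard, not by this sketch:

```python
# Illustrative sketch of FL vs. order-k exponential-Golomb selection.
# Bit layout and flag values are assumptions, not the LASeR syntax.

def exp_golomb_k(n, k):
    """Order-k exponential-Golomb code for a nonnegative integer n:
    the quotient n >> k is coded with an order-0 EG code (a run of
    zeros followed by the binary of quotient+1), then the k low bits."""
    q = (n >> k) + 1
    prefix = bin(q)[2:]
    code = "0" * (len(prefix) - 1) + prefix        # order-0 EG of n >> k
    if k:
        code += format(n & ((1 << k) - 1), f"0{k}b")  # k LSBs of n
    return code

def encode_point(n, k, fl_bits):
    """Emit a 1-bit scheme flag followed by the shorter of the two
    codes: '0' + fixed-length, or '1' + exp-Golomb."""
    eg = exp_golomb_k(n, k)
    fl = format(n, f"0{fl_bits}b")
    if len(fl) <= len(eg):
        return "0" + fl        # flag 0: fixed length
    return "1" + eg            # flag 1: exponential-Golomb

small = encode_point(3, k=0, fl_bits=2)    # FL wins: "0" + "11"
```

Small, frequent values favor EG (short codes for small n), while values near the top of a known range favor FL, which is why selecting per point sequence improves overall bit efficiency.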
  • Publication number: 20100002763
    Abstract: Provided are an apparatus and method for describing and processing digital items using a scene representation language. The apparatus includes: a digital item method engine (DIME) unit for executing components based on component information included in the digital item; and a scene representation unit for expressing scenes of a plurality of media data included in the digital item by defining their spatio-temporal relations and allowing the media data to interact with each other. The digital item includes a scene representation carrying representation information of the scene, and calling information with which the DIME unit executes the scene representation unit to represent the scene based on that scene representation information.
    Type: Application
    Filed: September 21, 2007
    Publication date: January 7, 2010
    Inventors: Ye-Sun Joung, Jung-Won Kang, Won-Sik Cheong, Ji-Hun Cha, Kyung-Ae Moon, Jin-Woo Hong, Young-Kwon Lim