Patents by Inventor Te-Won Lee

Te-Won Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9112989
    Abstract: A mobile device that is capable of automatically starting and ending the recording of an audio signal captured by at least one microphone is presented. The mobile device is capable of adjusting a number of parameters related to audio logging based on the context information of the audio input signal.
    Type: Grant
    Filed: March 30, 2011
    Date of Patent: August 18, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Te-Won Lee, Khaled Helmi El-Maleh, Heejong Yoo, Jongwon Shin
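    A minimal sketch of how such context-driven start/stop logic might look. Frame energy stands in for the patent's richer "context information", and the hangover heuristic and function names are illustrative, not taken from the patent:

    ```python
    def auto_log(frames, threshold=0.01, hangover=2):
        """Record frames while the audio context looks active.

        Frame energy is used as a toy context cue; a short hangover keeps
        recording briefly after activity stops so trailing audio is not clipped.
        """
        recorded, active = [], 0
        for frame in frames:
            energy = sum(s * s for s in frame) / len(frame)
            if energy > threshold:
                active = hangover + 1  # re-arm the hangover window
            if active > 0:
                recorded.append(frame)
                active -= 1
        return recorded
    ```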
  • Patent number: 9082035
    Abstract: Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on the context associated with the image. Improved OCR results are generated using the group of graphical objects. Input from various sensors including microphone, GPS, and camera, along with user inputs including voice, touch, and user usage patterns may be used in inferring the user context and selecting dictionaries that are most relevant to the inferred contexts.
    Type: Grant
    Filed: April 18, 2012
    Date of Patent: July 14, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Kyuwoong Hwang, Te-Won Lee, Duck Hoon Kim, Kisun You, Minho Jin, Taesu Kim, Hyun-Mook Cho
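    One way to picture the context-to-dictionary idea is the sketch below; the contexts, dictionary contents, and ranking heuristic are all hypothetical stand-ins for the patent's sensor-driven inference:

    ```python
    # Hypothetical context -> dictionary mapping; entries are illustrative only.
    DICTIONARIES = {
        "restaurant": {"menu", "pasta", "espresso"},
        "street": {"stop", "exit", "avenue"},
    }

    def infer_context(sensor_readings):
        """Toy inference: pick the first context mentioned in any reading."""
        for context in DICTIONARIES:
            if any(context in reading for reading in sensor_readings):
                return context
        return None

    def rank_ocr_candidates(candidates, context):
        """Prefer OCR candidates found in the context-selected dictionary."""
        dictionary = DICTIONARIES.get(context, set())
        return sorted(candidates, key=lambda word: word not in dictionary)
    ```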
  • Patent number: 9053361
    Abstract: In several aspects of described embodiments, an electronic device and method use a camera to capture an image or a frame of video of an environment outside the electronic device followed by identification of blocks of regions in the image. Each block that contains a region is checked, as to whether a test for presence of a line of pixels is met. When the test is met for a block, that block is identified as pixel-line-present. Pixel-line-present blocks are used to identify blocks that are adjacent. One or more adjacent block(s) may be merged with a pixel-line-present block when one or more rules are found to be satisfied, resulting in a merged block. The merged block is then subject to the above-described test, to verify presence of a line of pixels therein, and when the test is satisfied the merged block is processed normally, e.g. classified as text or non-text.
    Type: Grant
    Filed: January 23, 2013
    Date of Patent: June 9, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Pawan Kumar Baheti, Dhananjay Ashok Gore, Hyung-Il Koo, Te-Won Lee
  • Patent number: 8942420
    Abstract: A portable computing device reads information embossed on a form factor utilizing a built-in digital camera and determines dissimilarity between each pair of embossed characters to confirm consistency. Techniques comprise capturing an image of a form factor having information embossed thereupon, and detecting embossed characters. The detecting utilizes a gradient image and one or more edge images with a mask corresponding to the regions for which specific information is expected to be found on the form factor. The embossed form factor may be a credit card, and the captured image may comprise an account number and an expiration date embossed upon the credit card. Detecting embossed characters may comprise detecting the account number and the expiration date of the credit card, and/or the detecting may utilize a gradient image and one or more edge images with a mask corresponding to the regions for the account number and expiration date.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: January 27, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Duck Hoon Kim, Young-Ki Baik, Te-Won Lee
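    The gradient-plus-mask step can be sketched as follows, on a plain 2D grid; the forward-difference gradient and the scalar "field response" are simplifications of the patent's gradient and edge images:

    ```python
    def gradient_magnitude(img):
        """Forward-difference gradient magnitude over a 2D grayscale grid."""
        h, w = len(img), len(img[0])
        grad = [[0.0] * w for _ in range(h)]
        for y in range(h - 1):
            for x in range(w - 1):
                gx = img[y][x + 1] - img[y][x]
                gy = img[y + 1][x] - img[y][x]
                grad[y][x] = (gx * gx + gy * gy) ** 0.5
        return grad

    def field_response(grad, mask):
        """Sum gradient energy inside the mask covering an expected field,
        e.g. the account-number or expiration-date region of a card."""
        return sum(grad[y][x]
                   for y in range(len(grad))
                   for x in range(len(grad[0]))
                   if mask[y][x])
    ```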
  • Patent number: 8907929
    Abstract: The embodiments provide systems and methods for touchless sensing and gesture recognition using continuous wave sound signals. Continuous wave sound, such as ultrasound, emitted by a transmitter may reflect from an object, and be received by one or more sound receivers. Sound signals may be temporally encoded. Received sound signals may be processed to determine a channel impulse response or calculate time of flight. Determined channel impulse responses may be processed to extract recognizable features or angles. Extracted features may be compared to a database of features to identify a user input gesture associated with the matched feature. Angles of channel impulse response curves may be associated with an input gesture. Time of flight values from each receiver may be used to determine coordinates of the reflecting object. Embodiments may be implemented as part of a graphical user interface. Embodiments may be used to determine a location of an emitter.
    Type: Grant
    Filed: September 17, 2010
    Date of Patent: December 9, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Ren Li, Te-Won Lee, Hui-ya L. Nelson, Samir K. Gupta
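    The time-of-flight-to-coordinates step can be illustrated with a simplified 2D setup, assuming each receiver's time of flight yields a round-trip range to the reflecting object (a strong simplification of the patent's continuous-wave processing):

    ```python
    import math

    def locate_2d(d1, d2, baseline):
        """Intersect two range circles: receiver 1 at (0, 0), receiver 2 at
        (baseline, 0). d1 and d2 are object distances derived from time of
        flight (distance = speed_of_sound * tof / 2 for a round trip).
        Returns the intersection with y >= 0."""
        x = (d1 * d1 - d2 * d2 + baseline * baseline) / (2 * baseline)
        y_squared = d1 * d1 - x * x
        if y_squared < 0:
            raise ValueError("ranges do not intersect")
        return x, math.sqrt(y_squared)
    ```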
  • Publication number: 20140324591
    Abstract: In an embodiment, two or more local wireless peer-to-peer connected user equipments (UEs) capture local ambient sound, and report information associated with the captured local ambient sound to an authentication device. The authentication device compares the reported information to determine a degree of environmental similarity for the UEs, and selectively authenticates the UEs as being in a shared environment based on the determined degree of environmental similarity. A given UE among the two or more UEs selects a target UE for performing a given action based on whether the authentication device authenticates the UEs as being in the shared environment.
    Type: Application
    Filed: April 28, 2014
    Publication date: October 30, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Taesu KIM, Ravinder Paul CHANDHOK, Te-Won LEE
  • Patent number: 8874439
    Abstract: Signal separation techniques based on frequency dependency are described. In one implementation, a blind signal separation process is provided that avoids the permutation problem of previous signal separation processes. In the process, two or more signal sources are provided, with each signal source having recognized frequency dependencies. The process uses these inter-frequency dependencies to more robustly separate the source signals. The process receives a set of mixed signal input signals, and samples each input signal using a rolling window process. The sampled data is transformed into the frequency domain, which provides channel inputs to the inter-frequency dependent separation process. Since frequency dependencies have been defined for each source, the process is able to use the frequency dependency to more accurately separate the signals.
    Type: Grant
    Filed: March 1, 2006
    Date of Patent: October 28, 2014
    Assignee: The Regents of the University of California
    Inventors: Taesu Kim, Te-Won Lee
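    The front-end of this process (rolling-window sampling plus a transform to the frequency domain) can be sketched as below; the inter-frequency-dependent separation stage itself is beyond a short example, and the naive DFT is used only for clarity:

    ```python
    import cmath

    def dft(frame):
        """Naive discrete Fourier transform of one window."""
        n = len(frame)
        return [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
                for k in range(n)]

    def rolling_stft(signal, win=4, hop=2):
        """Sample the input with a rolling window and transform each window,
        yielding per-frequency channel inputs for the separation stage."""
        return [dft(signal[i:i + win])
                for i in range(0, len(signal) - win + 1, hop)]
    ```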
  • Publication number: 20140112526
    Abstract: A portable computing device reads information embossed on a form factor utilizing a built-in digital camera and determines dissimilarity between each pair of embossed characters to confirm consistency. Techniques comprise capturing an image of a form factor having information embossed thereupon, and detecting embossed characters. The detecting utilizes a gradient image and one or more edge images with a mask corresponding to the regions for which specific information is expected to be found on the form factor. The embossed form factor may be a credit card, and the captured image may comprise an account number and an expiration date embossed upon the credit card. Detecting embossed characters may comprise detecting the account number and the expiration date of the credit card, and/or the detecting may utilize a gradient image and one or more edge images with a mask corresponding to the regions for the account number and expiration date.
    Type: Application
    Filed: October 18, 2012
    Publication date: April 24, 2014
    Applicant: QUALCOMM Incorporated
    Inventors: Duck Hoon Kim, Young-Ki Baik, Te-Won Lee
  • Patent number: 8665338
    Abstract: Techniques are described for identifying blurred images and recognizing text. One or more images of text may be captured. A change of movement associated with each image of the one or more images may be calculated. The change of movement associated with an image of the one or more images represents a change in an amount of acceleration of the device used to capture the image while the image was being captured. A steady image may be selected from the one or more images to use for text recognition. The steady image can be selected using the variances of acceleration associated with each image of the one or more images.
    Type: Grant
    Filed: March 3, 2011
    Date of Patent: March 4, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Hyung-Il Koo, Taesu Kim, Ki-Hyun Kim, Te-Won Lee
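    Selecting the steady image by acceleration variance can be sketched directly; the pairing of images with per-capture accelerometer traces is an assumption about how the data would be organized:

    ```python
    def variance(samples):
        mean = sum(samples) / len(samples)
        return sum((s - mean) ** 2 for s in samples) / len(samples)

    def select_steady_image(images, accel_traces):
        """Pick the image whose capture interval had the least variance in
        accelerometer readings, i.e. the steadiest capture."""
        scores = [variance(trace) for trace in accel_traces]
        return images[scores.index(min(scores))]
    ```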
  • Patent number: 8626498
    Abstract: A voice activity detection (VAD) system includes a first voice activity detector, a second voice activity detector and control logic. The first voice activity detector is included in a device and produces a first VAD signal. The second voice activity detector is located externally to the device and produces a second VAD signal. The control logic combines the first and second VAD signals into a VAD output signal. Voice activity may be detected based on the VAD output signal. The second VAD signal can be represented as a flag included in a packet containing digitized audio. The packet can be transmitted to the device from the externally located VAD over a wireless link.
    Type: Grant
    Filed: February 24, 2010
    Date of Patent: January 7, 2014
    Assignee: QUALCOMM Incorporated
    Inventor: Te-Won Lee
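    The control logic that merges the two VAD decisions might look like the sketch below; the packet field name and the OR/AND modes are illustrative assumptions, not details from the patent:

    ```python
    def combine_vad(local_vad, packet, mode="or"):
        """Combine the on-device VAD decision with the external VAD flag
        carried in a packet of digitized audio (field name is hypothetical)."""
        remote_vad = bool(packet.get("vad_flag", False))
        if mode == "or":
            return local_vad or remote_vad
        if mode == "and":
            return local_vad and remote_vad
        raise ValueError("unknown mode: " + mode)
    ```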
  • Patent number: 8606293
    Abstract: Estimating a location of a mobile device is performed by comparing environmental information, such as environmental sound, associated with the mobile device with that of other devices to determine if the environmental information is similar enough to conclude that the mobile device is in a comparable location as another device. The devices may be in comparable locations in that they are in geographically similar locations (e.g., same store, same street, same city, etc.). The devices may be in comparable locations even though they are located in geographically dissimilar locations because the environmental information of the two locations demonstrates that the devices are in the same perceived location. With knowledge that the devices are in comparable locations, and with knowledge of the location of one of the devices, certain actions, such as targeted advertising, may be taken with respect to another device that is within a comparable location.
    Type: Grant
    Filed: October 5, 2010
    Date of Patent: December 10, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Te-Won Lee
  • Publication number: 20130308506
    Abstract: The various aspects are directed to automatic device-to-device connection control. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, receives a second sound signature from a peer device, compares the first sound signature to the second sound signature, and pairs with the peer device. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, sends the first sound signature to a peer device, and pairs with the peer device. An aspect detects a beacon sound signal, wherein the beacon sound signal is detected from a certain direction, extracts a code embedded in the beacon sound signal, and pairs with a peer device.
    Type: Application
    Filed: May 16, 2013
    Publication date: November 21, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Te-Won Lee
  • Patent number: 8514295
    Abstract: This disclosure describes techniques that can improve and possibly accelerate the generation of augmented reality (AR) information with respect to objects that appear in images of a video sequence. To do so, the techniques of this disclosure capture and use information about the eyes of a user of a video device. The video device may include two different cameras. A first camera is oriented to capture a sequence of images (e.g., video) outward from a user. A second camera is oriented to capture images of the eyes of the user when the first camera captures images outward from the user. The eyes of the user, as captured by one or more images of the second camera, may be used to generate a probability map, and the probability map may be used to prioritize objects in the first image for AR processing.
    Type: Grant
    Filed: December 17, 2010
    Date of Patent: August 20, 2013
    Assignee: QUALCOMM Incorporated
    Inventor: Te-Won Lee
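    A toy version of the gaze-driven prioritization could be built from a fixation histogram; the grid representation and scoring are assumptions for illustration:

    ```python
    def gaze_probability_map(gaze_points, width, height):
        """Accumulate eye-fixation samples into a normalized grid."""
        grid = [[0.0] * width for _ in range(height)]
        for x, y in gaze_points:
            grid[y][x] += 1.0
        total = sum(sum(row) for row in grid) or 1.0
        return [[v / total for v in row] for row in grid]

    def prioritize_objects(objects, prob_map):
        """objects maps a label to its pixel coordinates; labels with more
        gaze probability mass come first in the AR processing queue."""
        def mass(label):
            return sum(prob_map[y][x] for x, y in objects[label])
        return sorted(objects, key=mass, reverse=True)
    ```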
  • Publication number: 20130182858
    Abstract: A method for responding in an augmented reality (AR) application of a mobile device to an external sound is disclosed. The mobile device detects a target. A virtual object is initiated in the AR application. Further, the external sound is received, by at least one sound sensor of the mobile device, from a sound source. Geometric information between the sound source and the target is determined, and at least one response for the virtual object to perform in the AR application is generated based on the geometric information.
    Type: Application
    Filed: August 15, 2012
    Publication date: July 18, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Kisun You, Taesu Kim, Kyuwoong Hwang, Minho Jin, Hyun-Mook Cho, Te-Won Lee
  • Patent number: 8483725
    Abstract: A method for determining a location of a mobile device with reference to locations of a plurality of reference devices is disclosed. The mobile device receives ambient sound and provides ambient sound information to a server. Each reference device receives ambient sound and provides ambient sound information to the server. The ambient sound information includes a sound signature extracted from the ambient sound. The server determines a degree of similarity of the ambient sound information between the mobile device and each of the plurality of reference devices. The server determines the location of the mobile device to be a location of a reference device having the greatest degree of similarity.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: July 9, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Yong-Hui Lee, Te-Won Lee
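    The server-side matching step can be sketched as a nearest-signature search; cosine similarity is one plausible choice of similarity measure, not necessarily the patent's:

    ```python
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def estimate_location(mobile_signature, reference_devices):
        """reference_devices: list of (location, signature) pairs; return the
        location of the reference device with the most similar ambient sound."""
        location, _ = max(reference_devices,
                          key=lambda dev: cosine_similarity(mobile_signature, dev[1]))
        return location
    ```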
  • Publication number: 20130108115
    Abstract: Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on the context associated with the image. Improved OCR results are generated using the group of graphical objects. Input from various sensors including microphone, GPS, and camera, along with user inputs including voice, touch, and user usage patterns may be used in inferring the user context and selecting dictionaries that are most relevant to the inferred contexts.
    Type: Application
    Filed: April 18, 2012
    Publication date: May 2, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Kyuwoong Hwang, Te-Won Lee, Duck Hoon Kim, Kisun You, Minho Jin, Taesu Kim, Hyun-Mook Cho
  • Publication number: 20130079033
    Abstract: Example methods, apparatuses, or articles of manufacture are disclosed herein that may be utilized, in whole or in part, to facilitate or support one or more operations or techniques for position estimation via one or more proximate fingerprints for use in or with a mobile communication device.
    Type: Application
    Filed: September 21, 2012
    Publication date: March 28, 2013
    Inventors: Rajarshi Gupta, Nayeem Islam, Saumitra Mohan Das, Ayman Fawzy Naguib, Te-Won Lee
  • Publication number: 20130027757
    Abstract: A method of scanning an image of a document with a portable electronic device includes interactively indicating in substantially real time on a user interface of the portable electronic device, an instruction for capturing at least one portion of an image to enhance quality. The indication is in response to identifying degradation associated with the portion(s) of the image. The method also includes capturing the portion(s) of the image with the portable electronic device according to the instruction. The method further includes stitching the captured portion(s) of the image in place of a degraded portion of a reference image corresponding to the document, to create a corrected stitched image of the document.
    Type: Application
    Filed: July 29, 2011
    Publication date: January 31, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Te-Won Lee, Kyuwoong Hwang, Kisun You, Taesu Kim, Hyung-Il Koo
  • Publication number: 20120224072
    Abstract: Techniques are described for identifying blurred images and recognizing text. One or more images of text may be captured. A change of movement associated with each image of the one or more images may be calculated. The change of movement associated with an image of the one or more images represents a change in an amount of acceleration of the device used to capture the image while the image was being captured. A steady image may be selected from the one or more images to use for text recognition. The steady image can be selected using the variances of acceleration associated with each image of the one or more images.
    Type: Application
    Filed: March 3, 2011
    Publication date: September 6, 2012
    Applicant: QUALCOMM Incorporated
    Inventors: Hyung-Il Koo, Taesu Kim, Ki-Hyun Kim, Te-Won Lee
  • Publication number: 20120224707
    Abstract: A method for identifying mobile devices in a similar sound environment is disclosed. Each of at least two mobile devices captures an input sound and extracts a sound signature from the input sound. Further, the mobile device extracts a sound feature from the input sound and determines a reliability value based on the sound feature. The reliability value may refer to a probability of a normal sound class given the sound feature. A server receives a packet including the sound signatures and reliability values from the mobile devices. A similarity value between sound signatures from a pair of the mobile devices is determined based on corresponding reliability values from the pair of mobile devices. Specifically, the sound signatures are weighted by the corresponding reliability values. The server identifies mobile devices in a similar sound environment based on the similarity values.
    Type: Application
    Filed: February 10, 2012
    Publication date: September 6, 2012
    Applicant: QUALCOMM Incorporated
    Inventors: Taesu Kim, Te-Won Lee
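    One plausible reading of "signatures weighted by reliability values" is to scale a base signature similarity by both devices' scalar reliabilities, so pairs whose sound features look abnormal count for less; the sketch below makes that assumption explicit:

    ```python
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def weighted_similarity(sig_a, rel_a, sig_b, rel_b):
        """Scale a base signature similarity by both devices' reliability
        values (assumed interpretation, not confirmed by the abstract)."""
        return rel_a * rel_b * cosine_similarity(sig_a, sig_b)

    def similar_environment(sig_a, rel_a, sig_b, rel_b, threshold=0.5):
        """Flag two devices as being in a similar sound environment."""
        return weighted_similarity(sig_a, rel_a, sig_b, rel_b) >= threshold
    ```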