Patents by Inventor Kisun You

Kisun You has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9082035
    Abstract: Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on the context associated with the image. Improved OCR results are generated using the group of graphical objects. Input from various sensors including microphone, GPS, and camera, along with user inputs including voice, touch, and user usage patterns may be used in inferring the user context and selecting dictionaries that are most relevant to the inferred contexts.
    Type: Grant
    Filed: April 18, 2012
    Date of Patent: July 14, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Kyuwoong Hwang, Te-Won Lee, Duck Hoon Kim, Kisun You, Minho Jin, Taesu Kim, Hyun-Mook Cho
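    A minimal sketch of the flow described in the abstract above: infer a context from coarse sensor cues, then use a context-specific dictionary to rescore OCR candidates. The dictionaries, function names, and rescoring rule are illustrative assumptions, not the patented implementation.

    ```python
    # Hypothetical context-driven dictionary selection for OCR rescoring.
    CONTEXT_DICTIONARIES = {
        "restaurant": {"menu", "pasta", "salad", "espresso"},
        "street": {"stop", "exit", "parking", "avenue"},
    }

    def infer_context(gps_tag: str, audio_label: str) -> str:
        """Combine coarse sensor cues (GPS category, audio scene) into one context label."""
        if audio_label == "dishes" or gps_tag == "food_poi":
            return "restaurant"
        return "street"

    def rescore_ocr(candidates: list[str], context: str) -> str:
        """Prefer OCR candidates that appear in the dictionary for the inferred context."""
        vocab = CONTEXT_DICTIONARIES.get(context, set())
        in_vocab = [c for c in candidates if c.lower() in vocab]
        return in_vocab[0] if in_vocab else candidates[0]

    print(rescore_ocr(["pasla", "pasta"], infer_context("food_poi", "dishes")))  # -> pasta
    ```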
  • Patent number: 8942484
    Abstract: A method includes receiving an indication of a set of image regions identified in image data. The method further includes selecting image regions from the set of image regions for text extraction at least partially based on image region stability.
    Type: Grant
    Filed: March 6, 2012
    Date of Patent: January 27, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Hyung-Il Koo, Kisun You
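    A minimal sketch of stability-based region selection as the abstract describes it: score each candidate region and keep only the stable ones for text extraction. The stability measure (relative area variation across thresholds) is an assumption, since the abstract does not fix a specific formula.

    ```python
    def stability(region_areas: list[int]) -> float:
        """Lower relative area variation across binarization thresholds = more stable."""
        mean = sum(region_areas) / len(region_areas)
        var = sum((a - mean) ** 2 for a in region_areas) / len(region_areas)
        return 1.0 / (1.0 + var / (mean * mean))

    # Areas of each candidate region measured at several thresholds (toy data).
    regions = {"r1": [100, 102, 99], "r2": [100, 250, 40]}
    selected = [name for name, areas in regions.items() if stability(areas) > 0.9]
    print(selected)  # only the stable region is kept for text extraction
    ```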
  • Publication number: 20140156274
    Abstract: Methods and systems to translate input labels of arcs of a network, corresponding to a sequence of states of the network, to a list of output grammar elements of the arcs, corresponding to a sequence of grammar elements. The network may include a plurality of speech recognition models combined with a weighted finite state machine transducer (WFST). Traversal may include active arc traversal, and may include active arc propagation. Arcs may be processed in parallel, including arcs originating from multiple source states and directed to a common destination state. Self-loops associated with states may be modeled within outgoing arcs of the states, which may reduce synchronization operations. Tasks may be ordered with respect to cache-data locality to associate tasks with processing threads based at least in part on whether another task associated with a corresponding data object was previously assigned to the thread.
    Type: Application
    Filed: June 24, 2013
    Publication date: June 5, 2014
    Inventors: Kisun You, Christopher J. Hughes, Yen-Kuang Chen
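    A toy, serial sketch of one active-arc propagation step in a WFST decoder, following the traversal idea in the abstract; the real decoder processes arcs in parallel, applies beams, and combines full acoustic and language models. The arc layout and scores below are invented for illustration.

    ```python
    from collections import defaultdict

    # arcs: source state -> list of (dest, input_label, output_label, weight)
    ARCS = {
        0: [(1, "ih", "", 0.5), (2, "ax", "", 0.7)],
        1: [(3, "t", "it", 0.2)],
        2: [(3, "t", "at", 0.3)],
    }

    def propagate(active: dict[int, float], frame_scores: dict[str, float]):
        """Expand all arcs leaving active states; keep the best cost per destination."""
        nxt = defaultdict(lambda: float("inf"))
        outputs = {}
        for state, cost in active.items():
            for dest, ilabel, olabel, weight in ARCS.get(state, []):
                new_cost = cost + weight + frame_scores.get(ilabel, 5.0)
                if new_cost < nxt[dest]:
                    nxt[dest] = new_cost
                    outputs[dest] = olabel
        return dict(nxt), outputs

    active, outs = propagate({0: 0.0}, {"ih": 0.1, "ax": 0.4})
    print(active, outs)
    ```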
  • Patent number: 8606293
    Abstract: Estimating a location of a mobile device is performed by comparing environmental information, such as environmental sound, associated with the mobile device with that of other devices to determine if the environmental information is similar enough to conclude that the mobile device is in a comparable location as another device. The devices may be in comparable locations in that they are in geographically similar locations (e.g., same store, same street, same city, etc.). The devices may be in comparable locations even though they are located in geographically dissimilar locations because the environmental information of the two locations demonstrates that the devices are in the same perceived location. With knowledge that the devices are in comparable locations, and with knowledge of the location of one of the devices, certain actions, such as targeted advertising, may be taken with respect to another device that is within a comparable location.
    Type: Grant
    Filed: October 5, 2010
    Date of Patent: December 10, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Te-Won Lee
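    A minimal sketch of the comparison step in the abstract: decide whether two devices are in a comparable location by measuring how similar their ambient-sound signatures are. Cosine similarity and the threshold are assumptions made for illustration.

    ```python
    import math

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def comparable_location(sig_a: list[float], sig_b: list[float], threshold: float = 0.9) -> bool:
        """True if the environmental sound signatures are similar enough."""
        return cosine(sig_a, sig_b) >= threshold

    print(comparable_location([0.2, 0.8, 0.1], [0.25, 0.75, 0.12]))  # -> True
    ```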
  • Publication number: 20130308506
    Abstract: The various aspects are directed to automatic device-to-device connection control. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, receives a second sound signature from a peer device, compares the first sound signature to the second sound signature, and pairs with the peer device. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, sends the first sound signature to a peer device, and pairs with the peer device. An aspect detects a beacon sound signal, wherein the beacon sound signal is detected from a certain direction, extracts a code embedded in the beacon sound signal, and pairs with a peer device.
    Type: Application
    Filed: May 16, 2013
    Publication date: November 21, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Te-Won Lee
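    A sketch of the pairing decision in the abstract: extract a signature from sound arriving from a chosen direction, compare it with the signature received from the peer, and pair only on a match. The element-wise matching rule is an illustrative assumption.

    ```python
    def signatures_match(sig_local: list[float], sig_peer: list[float], tol: float = 0.1) -> bool:
        """Signatures match if every component is within a small tolerance."""
        return all(abs(a - b) <= tol for a, b in zip(sig_local, sig_peer))

    def maybe_pair(sig_local: list[float], sig_peer: list[float]) -> str:
        return "pair with peer device" if signatures_match(sig_local, sig_peer) else "ignore peer device"

    print(maybe_pair([0.1, 0.5, 0.3], [0.12, 0.48, 0.31]))
    ```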
  • Publication number: 20130252597
    Abstract: A method for controlling an application in a mobile device is disclosed. The method includes receiving environmental information, inferring an environmental context from the environmental information, and controlling activation of the application based on a set of reference models associated with the inferred environmental context. In addition, the method may include receiving a sound input, extracting a sound feature from the sound input, transmitting the sound feature to a server configured to group a plurality of mobile devices into at least one similar context group, and receiving, from the server, information on a leader device or a non-leader device and the at least one similar context group.
    Type: Application
    Filed: February 14, 2013
    Publication date: September 26, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Minho Jin, Taesu Kim, Kisun You, Hyun-Mook Cho, Hyung-Il Koo, Duck-Hoon Kim
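    A hypothetical sketch of the control flow in the abstract: infer an environmental context from a sound feature and activate an application only if the context's reference models allow it. The contexts, reference models, and feature labels are stand-ins.

    ```python
    REFERENCE_MODELS = {"driving": {"navigation"}, "meeting": {"voice_recorder"}}

    def infer_context(sound_feature: str) -> str:
        """Map an extracted sound feature to an environmental context (toy rule)."""
        return "driving" if sound_feature == "engine" else "meeting"

    def should_activate(app: str, sound_feature: str) -> bool:
        context = infer_context(sound_feature)
        return app in REFERENCE_MODELS.get(context, set())

    print(should_activate("navigation", "engine"))  # True
    print(should_activate("navigation", "speech"))  # False
    ```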
  • Publication number: 20130182858
    Abstract: A method for responding in an augmented reality (AR) application of a mobile device to an external sound is disclosed. The mobile device detects a target. A virtual object is initiated in the AR application. Further, the external sound is received, by at least one sound sensor of the mobile device, from a sound source. Geometric information between the sound source and the target is determined, and at least one response for the virtual object to perform in the AR application is generated based on the geometric information.
    Type: Application
    Filed: August 15, 2012
    Publication date: July 18, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Kisun You, Taesu Kim, Kyuwoong Hwang, Minho Jin, Hyun-Mook Cho, Te-Won Lee
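    An illustrative sketch of the geometric step in the abstract: compute the direction from the tracked target to the sound source and pick a response for the virtual object. The planar geometry and the two responses are assumptions, not the patented method.

    ```python
    import math

    def direction_deg(target_xy: tuple, source_xy: tuple) -> float:
        """Angle (degrees) from the target toward the sound source."""
        dx, dy = source_xy[0] - target_xy[0], source_xy[1] - target_xy[1]
        return math.degrees(math.atan2(dy, dx))

    def respond(angle_deg: float) -> str:
        """Choose a response for the virtual object based on where the sound came from."""
        return "turn_right" if -90 < angle_deg < 90 else "turn_left"

    print(respond(direction_deg((0, 0), (1, 0.5))))  # -> turn_right
    ```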
  • Publication number: 20130177203
    Abstract: A method includes tracking an object in each of a plurality of frames of video data to generate a tracking result. The method also includes performing object processing of a subset of frames of the plurality of frames selected according to a multi-frame latency of an object detector or an object recognizer. The method includes combining the tracking result with an output of the object processing to produce a combined output.
    Type: Application
    Filed: August 6, 2012
    Publication date: July 11, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Hyung-Il Koo, Kisun You, Young-Ki Baik
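    A sketch of the combination described in the abstract: a cheap tracker updates every frame, while a detector/recognizer with multi-frame latency re-anchors the result whenever its output becomes available. The placeholder motion model and fixed latency are assumptions.

    ```python
    DETECTOR_LATENCY = 3  # detector output only available every 3rd frame

    def track(prev_box: tuple, frame_idx: int) -> tuple:
        """Cheap per-frame tracker: shift the box slightly (placeholder motion)."""
        x, y, w, h = prev_box
        return (x + 1, y, w, h)

    def detect(frame_idx: int) -> tuple:
        """Expensive detector, simulated as a fixed box when it runs."""
        return (frame_idx, 0, 20, 20)

    box = (0, 0, 20, 20)
    for frame in range(1, 7):
        box = track(box, frame)
        if frame % DETECTOR_LATENCY == 0:   # delayed detection arrives
            box = detect(frame)             # combine: re-anchor the tracking result
        print(frame, box)
    ```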
  • Patent number: 8484154
    Abstract: Methods and systems to translate input labels of arcs of a network, corresponding to a sequence of states of the network, to a list of output grammar elements of the arcs, corresponding to a sequence of grammar elements. The network may include a plurality of speech recognition models combined with a weighted finite state machine transducer (WFST). Traversal may include active arc traversal, and may include active arc propagation. Arcs may be processed in parallel, including arcs originating from multiple source states and directed to a common destination state. Self-loops associated with states may be modeled within outgoing arcs of the states, which may reduce synchronization operations. Tasks may be ordered with respect to cache-data locality to associate tasks with processing threads based at least in part on whether another task associated with a corresponding data object was previously assigned to the thread.
    Type: Grant
    Filed: December 14, 2009
    Date of Patent: July 9, 2013
    Assignee: Intel Corporation
    Inventors: Kisun You, Christopher J. Hughes, Yen-Kuang Chen
  • Patent number: 8483725
    Abstract: A method for determining a location of a mobile device with reference to locations of a plurality of reference devices is disclosed. The mobile device receives ambient sound and provides ambient sound information to a server. Each reference device receives ambient sound and provides ambient sound information to the server. The ambient sound information includes a sound signature extracted from the ambient sound. The server determines a degree of similarity of the ambient sound information between the mobile device and each of the plurality of reference devices. The server determines the location of the mobile device to be a location of a reference device having the greatest degree of similarity.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: July 9, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Yong-Hui Lee, Te-Won Lee
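    A server-side sketch of the decision in the abstract: the mobile device's location is taken to be that of the reference device whose ambient-sound signature is most similar. The similarity measure (negative squared distance) is an assumption.

    ```python
    def similarity(a: list[float], b: list[float]) -> float:
        """Higher is more similar (negative squared distance)."""
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    def locate(mobile_sig: list[float], references: dict[str, list[float]]) -> str:
        """Return the reference device/location with the greatest degree of similarity."""
        return max(references, key=lambda name: similarity(mobile_sig, references[name]))

    refs = {"cafe": [0.9, 0.1], "station": [0.2, 0.8]}
    print(locate([0.85, 0.2], refs))  # -> cafe
    ```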
  • Publication number: 20130108115
    Abstract: Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on the context associated with the image. Improved OCR results are generated using the group of graphical objects. Input from various sensors including microphone, GPS, and camera, along with user inputs including voice, touch, and user usage patterns may be used in inferring the user context and selecting dictionaries that are most relevant to the inferred contexts.
    Type: Application
    Filed: April 18, 2012
    Publication date: May 2, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Kyuwoong Hwang, Te-Won Lee, Duck Hoon Kim, Kisun You, Minho Jin, Taesu Kim, Hyun-Mook Cho
  • Publication number: 20130110521
    Abstract: A particular method includes transitioning out of a low-power state at a processor. The method also includes retrieving audio feature data from a buffer after transitioning out of the low-power state. The audio feature data indicates features of audio data received during the low-power state of the processor.
    Type: Application
    Filed: May 30, 2012
    Publication date: May 2, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Kyu Woong Hwang, Kisun You, Minho Jin, Peter Jivan Shah, Kwokleung Chan, Taesu Kim
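    A sketch of the wake-up flow in the abstract: audio feature data keeps accumulating in a buffer while the processor is in a low-power state, and the processor drains the buffer after transitioning out of it. The ring-buffer size and the feature payload are illustrative assumptions.

    ```python
    from collections import deque

    class FeatureBuffer:
        def __init__(self, capacity: int = 8):
            self.buf = deque(maxlen=capacity)  # oldest features are overwritten

        def push(self, feature: dict) -> None:
            self.buf.append(feature)

        def drain(self) -> list:
            items = list(self.buf)
            self.buf.clear()
            return items

    buffer = FeatureBuffer()
    for frame in range(10):  # features captured while the main processor sleeps
        buffer.push({"frame": frame, "energy": frame * 0.1})

    # ... processor transitions out of the low-power state ...
    print(len(buffer.drain()), "buffered feature frames retrieved after wake")
    ```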
  • Publication number: 20130058575
    Abstract: A method includes receiving an indication of a set of image regions identified in image data. The method further includes selecting image regions from the set of image regions for text extraction at least partially based on image region stability.
    Type: Application
    Filed: March 6, 2012
    Publication date: March 7, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Hyung-Il Koo, Kisun You
  • Publication number: 20130027757
    Abstract: A method of scanning an image of a document with a portable electronic device includes interactively indicating in substantially real time on a user interface of the portable electronic device, an instruction for capturing at least one portion of an image to enhance quality. The indication is in response to identifying degradation associated with the portion(s) of the image. The method also includes capturing the portion(s) of the image with the portable electronic device according to the instruction. The method further includes stitching the captured portion(s) of the image in place of a degraded portion of a reference image corresponding to the document, to create a corrected stitched image of the document.
    Type: Application
    Filed: July 29, 2011
    Publication date: January 31, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Te-Won Lee, Kyuwoong Hwang, Kisun You, Taesu Kim, Hyung-Il Koo
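    An illustrative sketch of the guided-recapture loop in the abstract: flag degraded portions, show a real-time instruction to recapture them, and stitch the recaptures into the reference image. The quality scores and the "stitching" are simplified stand-ins.

    ```python
    def degraded_regions(quality_by_region: dict[str, float], threshold: float = 0.5) -> list[str]:
        return [name for name, q in quality_by_region.items() if q < threshold]

    def scan(reference: dict[str, str], quality: dict[str, float]) -> dict[str, str]:
        for region in degraded_regions(quality):
            print(f"Please recapture region: {region}")   # real-time UI instruction
            reference[region] = f"recaptured:{region}"     # stitch recapture in place
        return reference

    doc = {"top": "ok", "bottom": "blurred"}
    print(scan(doc, {"top": 0.9, "bottom": 0.2}))
    ```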
  • Publication number: 20130011055
    Abstract: A method for processing a multi-channel image is disclosed. The method includes generating a plurality of grayscale images from the multi-channel image. At least one text region is identified in the plurality of grayscale images and text region information is determined from the at least one text region. The method generates text information of the multi-channel image based on the text region information. If the at least one text region includes a plurality of text regions, text region information from the plurality of text regions is merged to generate the text information. The plurality of the grayscale images is processed in parallel. In identifying the at least one text region, at least one candidate text region may be identified in the plurality of grayscale images and the at least one text region may be identified in the identified candidate text region.
    Type: Application
    Filed: July 2, 2012
    Publication date: January 10, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Kisun You, Hyung-Il Koo, Hyun-Mook Cho
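    A sketch of the per-channel flow in the abstract: derive several grayscale images from a multi-channel image, search each for text regions in parallel, and merge the resulting region information. The one-pixel "image" and threshold detector are placeholders.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def to_grayscale_channels(pixel: tuple) -> list[int]:
        """Split an RGB value into per-channel grayscale images plus luminance."""
        r, g, b = pixel
        return [r, g, b, (r + g + b) // 3]

    def find_text_regions(gray_value: int) -> set:
        """Placeholder detector: dark values are treated as a candidate text region."""
        return {"dark_region"} if gray_value < 100 else set()

    channels = to_grayscale_channels((30, 180, 90))
    with ThreadPoolExecutor() as pool:                 # process grayscale images in parallel
        results = list(pool.map(find_text_regions, channels))

    merged = set().union(*results)                     # merge text region info across channels
    print(merged)
    ```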
  • Publication number: 20130004076
    Abstract: A method for recognizing a text block in an object is disclosed. The text block includes a set of characters. A plurality of images of the object are captured and received. The object in the received images is then identified by extracting a pattern in one of the object images and comparing the extracted pattern with predetermined patterns. Further, a boundary of the object in each of the object images is detected and verified based on predetermined size information of the identified object. Text blocks in the object images are identified based on predetermined location information of the identified object. Interim sets of characters in the identified text blocks are generated based on format information of the identified object. Based on the interim sets of characters, a set of characters in the text block in the object is determined.
    Type: Application
    Filed: February 7, 2012
    Publication date: January 3, 2013
    Applicant: QUALCOMM Incorporated
    Inventors: Hyung-Il Koo, Kisun You, Hyun-Mook Cho
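    A sketch of the final step in the abstract, combining interim character sets from several captures of the same text block into one result; the per-position majority vote is an assumption, since the abstract only states that the final set is determined from the interim sets.

    ```python
    from collections import Counter

    def combine(interim_reads: list[str]) -> str:
        """Per-position majority vote over interim character reads of the same text block."""
        length = min(len(read) for read in interim_reads)
        return "".join(
            Counter(read[i] for read in interim_reads).most_common(1)[0][0]
            for i in range(length)
        )

    print(combine(["1234 5678", "1234 S678", "12E4 5678"]))  # -> "1234 5678"
    ```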
  • Publication number: 20120226497
    Abstract: A method for generating an anti-model of a sound class is disclosed. A plurality of candidate sound data is provided for generating the anti-model. A plurality of similarity values between the plurality of candidate sound data and a reference sound model of a sound class is determined. An anti-model of the sound class is generated based on at least one candidate sound data having the similarity value within a similarity threshold range.
    Type: Application
    Filed: February 13, 2012
    Publication date: September 6, 2012
    Applicant: QUALCOMM Incorporated
    Inventors: Kisun You, Kyu Woong Hwang, Taesu Kim
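    A sketch of the anti-model construction in the abstract: keep only candidate sound data whose similarity to the reference model falls inside a threshold range, then build the anti-model from what remains. Averaging the kept candidates is an assumption.

    ```python
    def similarity(a: list[float], b: list[float]) -> float:
        return -sum((x - y) ** 2 for x, y in zip(a, b))  # higher = more similar

    def build_anti_model(reference, candidates, low=-1.0, high=-0.1):
        """Keep candidates that are near, but not too near, the reference sound model."""
        kept = [c for c in candidates if low <= similarity(reference, c) <= high]
        if not kept:
            return None
        return [sum(c[i] for c in kept) / len(kept) for i in range(len(reference))]

    reference = [0.5, 0.5]
    candidates = [[0.5, 0.5], [0.8, 0.2], [2.0, 2.0]]   # identical, near, and far examples
    print(build_anti_model(reference, candidates))       # built only from the "near" candidate
    ```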
  • Publication number: 20120224706
    Abstract: A method for recognizing an environmental sound in a client device in cooperation with a server is disclosed. The client device includes a client database having a plurality of sound models of environmental sounds and a plurality of labels, each of which identifies at least one sound model. The client device receives an input environmental sound and generates an input sound model based on the input environmental sound. At the client device, a similarity value is determined between the input sound model and each of the sound models to identify one or more sound models from the client database that are similar to the input sound model. A label is selected from labels associated with the identified sound models, and the selected label is associated with the input environmental sound based on a confidence level of the selected label.
    Type: Application
    Filed: October 31, 2011
    Publication date: September 6, 2012
    Applicant: QUALCOMM Incorporated
    Inventors: Kyu Woong Hwang, Taesu Kim, Kisun You
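    A client-side sketch of the recognition step in the abstract: compare the input sound model with the models in the client database, pick a label from the most similar ones, and accept it only if its confidence is high enough. The similarity measure, top-3 vote, and confidence rule are assumptions.

    ```python
    from collections import Counter

    def similarity(a: list[float], b: list[float]) -> float:
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    def recognize(input_model, database, confidence=0.5):
        """Label the input environmental sound, or defer to the server when confidence is low."""
        ranked = sorted(database, key=lambda e: similarity(input_model, e["model"]), reverse=True)
        top_labels = [e["label"] for e in ranked[:3]]
        label, votes = Counter(top_labels).most_common(1)[0]
        return label if votes / len(top_labels) >= confidence else "unknown (ask server)"

    client_db = [
        {"model": [0.9, 0.1], "label": "traffic"},
        {"model": [0.8, 0.2], "label": "traffic"},
        {"model": [0.1, 0.9], "label": "crowd"},
    ]
    print(recognize([0.85, 0.15], client_db))  # -> traffic
    ```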
  • Publication number: 20120142378
    Abstract: A method for determining a location of a mobile device with reference to locations of a plurality of reference devices is disclosed. The mobile device receives ambient sound and provides ambient sound information to a server. Each reference device receives ambient sound and provides ambient sound information to the server. The ambient sound information includes a sound signature extracted from the ambient sound. The server determines a degree of similarity of the ambient sound information between the mobile device and each of the plurality of reference devices. The server determines the location of the mobile device to be a location of a reference device having the greatest degree of similarity.
    Type: Application
    Filed: November 4, 2011
    Publication date: June 7, 2012
    Applicant: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Yong-Hui Lee, Te-Won Lee
  • Publication number: 20120142324
    Abstract: A method for providing information for a conference at one or more locations is disclosed. One or more mobile devices monitor one or more starting requirements of the conference and transmit input sound information to a server when the one or more starting requirements of the conference are detected. The one or more starting requirements may include a starting time of the conference, a location of the conference, and/or acoustic characteristics of a conference environment. The server generates conference information based on the input sound information from each mobile device and transmits the conference information to each mobile device. The conference information may include information on attendees, a current speaker among the attendees, an arrangement of the attendees, and/or a meeting log of attendee participation at the conference.
    Type: Application
    Filed: November 4, 2011
    Publication date: June 7, 2012
    Applicant: QUALCOMM Incorporated
    Inventors: Taesu Kim, Kisun You, Kyu Woong Hwang, Te-Won Lee
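    A sketch of the trigger logic in the abstract: each mobile device transmits its input sound information to the server only after the conference's starting requirements are detected (here, start time and location; the field names are illustrative assumptions).

    ```python
    from datetime import datetime, timezone

    def starting_requirements_met(now, start_time, device_location, conf_location) -> bool:
        return now >= start_time and device_location == conf_location

    def maybe_transmit(sound_info, now, start_time, device_location, conf_location):
        """Send input sound information once the starting requirements are detected."""
        if starting_requirements_met(now, start_time, device_location, conf_location):
            return {"to": "server", "payload": sound_info}  # server builds conference info
        return None

    start = datetime(2012, 6, 1, 9, 0, tzinfo=timezone.utc)
    now = datetime(2012, 6, 1, 9, 5, tzinfo=timezone.utc)
    print(maybe_transmit({"sound_signature": [0.4, 0.6]}, now, start, "room_a", "room_a"))
    ```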