Patents by Inventor Kisun You
Kisun You has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9082035
Abstract: Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on that context. Improved OCR results are generated using the group of graphical objects. Input from various sensors, including microphone, GPS, and camera, along with user inputs such as voice, touch, and usage patterns, may be used in inferring the user context and selecting the dictionaries most relevant to the inferred contexts.
Type: Grant
Filed: April 18, 2012
Date of Patent: July 14, 2015
Assignee: QUALCOMM Incorporated
Inventors: Kyuwoong Hwang, Te-Won Lee, Duck Hoon Kim, Kisun You, Minho Jin, Taesu Kim, Hyun-Mook Cho
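The abstract above describes biasing OCR toward a dictionary chosen from an inferred context. A minimal sketch of that idea follows; the context labels, dictionaries, and scoring rule are illustrative assumptions, not the patented method.

```python
# Toy context-sensitive dictionary selection for OCR candidates.
# Contexts and word lists are invented for illustration.
CONTEXT_DICTIONARIES = {
    "restaurant": {"menu", "pasta", "salad", "espresso"},
    "street":     {"stop", "exit", "avenue", "parking"},
}

def infer_context(sensor_cues):
    # Toy inference: pick the first context named in the sensor cues.
    for context in CONTEXT_DICTIONARIES:
        if context in sensor_cues:
            return context
    return "street"  # fallback context

def rescore_candidates(ocr_candidates, context):
    # Prefer OCR candidates that appear in the chosen context's dictionary.
    dictionary = CONTEXT_DICTIONARIES[context]
    return sorted(ocr_candidates,
                  key=lambda w: w.lower() in dictionary, reverse=True)

context = infer_context(["gps:downtown", "restaurant"])
best = rescore_candidates(["pasts", "pasta"], context)[0]  # "pasta" wins
```

A real system would combine dictionary membership with the OCR engine's own confidence scores rather than using a hard binary preference.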
-
Patent number: 8942484
Abstract: A method includes receiving an indication of a set of image regions identified in image data. The method further includes selecting image regions from the set of image regions for text extraction, at least partially based on image region stability.
Type: Grant
Filed: March 6, 2012
Date of Patent: January 27, 2015
Assignee: QUALCOMM Incorporated
Inventors: Hyung-Il Koo, Kisun You
-
Publication number: 20140156274
Abstract: Methods and systems to translate input labels of arcs of a network, corresponding to a sequence of states of the network, to a list of output grammar elements of the arcs, corresponding to a sequence of grammar elements. The network may include a plurality of speech recognition models combined with a weighted finite state machine transducer (WFST). Traversal may include active arc traversal, and may include active arc propagation. Arcs may be processed in parallel, including arcs originating from multiple source states and directed to a common destination state. Self-loops associated with states may be modeled within outgoing arcs of the states, which may reduce synchronization operations. Tasks may be ordered with respect to cache-data locality to associate tasks with processing threads based at least in part on whether another task associated with a corresponding data object was previously assigned to the thread.
Type: Application
Filed: June 24, 2013
Publication date: June 5, 2014
Inventors: Kisun You, Christopher J. Hughes, Yen-Kuang Chen
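The core operation the abstract describes is following arcs through a transducer network while collecting output labels and accumulating weights. The toy network, labels, and weights below are invented for illustration; the patented decoder covers parallel traversal and cache-aware scheduling that this sketch omits.

```python
# Toy traversal of a deterministic transducer network:
# arcs map a source state to (input_label, output_label, weight, dest_state).
ARCS = {
    0: [("h", "", 0.1, 1)],
    1: [("i", "hi", 0.2, 2)],
    2: [],
}

def traverse(input_labels, start=0):
    state, cost, outputs = start, 0.0, []
    for label in input_labels:
        # Follow the cheapest outgoing arc whose input label matches.
        matches = [a for a in ARCS[state] if a[0] == label]
        if not matches:
            return None  # no path accepts this input sequence
        _, out, weight, dest = min(matches, key=lambda a: a[2])
        if out:                 # empty output labels emit nothing
            outputs.append(out)
        cost += weight
        state = dest
    return outputs, cost

result = traverse(["h", "i"])  # collects ["hi"] with total weight 0.1 + 0.2
```

A production WFST decoder keeps many active arcs per frame and prunes by accumulated weight; this single-path version only shows the label-translation idea.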
-
Patent number: 8606293
Abstract: Estimating a location of a mobile device is performed by comparing environmental information, such as environmental sound, associated with the mobile device with that of other devices to determine if the environmental information is similar enough to conclude that the mobile device is in a comparable location as another device. The devices may be in comparable locations in that they are in geographically similar locations (e.g., same store, same street, same city, etc.). The devices may also be in comparable locations even though they are located in geographically dissimilar places, because the environmental information of the two locations demonstrates that the devices are in the same perceived location. With knowledge that the devices are in comparable locations, and with knowledge of the location of one of the devices, certain actions, such as targeted advertising, may be taken with respect to another device that is within a comparable location.
Type: Grant
Filed: October 5, 2010
Date of Patent: December 10, 2013
Assignee: QUALCOMM Incorporated
Inventors: Taesu Kim, Kisun You, Te-Won Lee
-
Publication number: 20130308506
Abstract: The various aspects are directed to automatic device-to-device connection control. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, receives a second sound signature from a peer device, compares the first sound signature to the second sound signature, and pairs with the peer device. An aspect extracts a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, sends the first sound signature to a peer device, and pairs with the peer device. An aspect detects a beacon sound signal, wherein the beacon sound signal is detected from a certain direction, extracts a code embedded in the beacon sound signal, and pairs with a peer device.
Type: Application
Filed: May 16, 2013
Publication date: November 21, 2013
Applicant: QUALCOMM Incorporated
Inventors: Taesu Kim, Kisun You, Te-Won Lee
-
Publication number: 20130252597
Abstract: A method for controlling an application in a mobile device is disclosed. The method includes receiving environmental information, inferring an environmental context from the environmental information, and controlling activation of the application based on a set of reference models associated with the inferred environmental context. In addition, the method may include receiving a sound input, extracting a sound feature from the sound input, transmitting the sound feature to a server configured to group a plurality of mobile devices into at least one similar context group, and receiving, from the server, information on a leader device or a non-leader device and the at least one similar context group.
Type: Application
Filed: February 14, 2013
Publication date: September 26, 2013
Applicant: QUALCOMM Incorporated
Inventors: Minho Jin, Taesu Kim, Kisun You, Hyun-Mook Cho, Hyung-Il Koo, Duck-Hoon Kim
-
Publication number: 20130182858
Abstract: A method for responding to an external sound in an augmented reality (AR) application of a mobile device is disclosed. The mobile device detects a target. A virtual object is initiated in the AR application. Further, the external sound is received, by at least one sound sensor of the mobile device, from a sound source. Geometric information between the sound source and the target is determined, and at least one response for the virtual object to perform in the AR application is generated based on the geometric information.
Type: Application
Filed: August 15, 2012
Publication date: July 18, 2013
Applicant: QUALCOMM Incorporated
Inventors: Kisun You, Taesu Kim, Kyuwoong Hwang, Minho Jin, Hyun-Mook Cho, Te-Won Lee
-
Publication number: 20130177203
Abstract: A method includes tracking an object in each of a plurality of frames of video data to generate a tracking result. The method also includes performing object processing of a subset of frames of the plurality of frames selected according to a multi-frame latency of an object detector or an object recognizer. The method includes combining the tracking result with an output of the object processing to produce a combined output.
Type: Application
Filed: August 6, 2012
Publication date: July 11, 2013
Applicant: QUALCOMM Incorporated
Inventors: Hyung-Il Koo, Kisun You, Young-Ki Baik
-
Patent number: 8484154
Abstract: Methods and systems to translate input labels of arcs of a network, corresponding to a sequence of states of the network, to a list of output grammar elements of the arcs, corresponding to a sequence of grammar elements. The network may include a plurality of speech recognition models combined with a weighted finite state machine transducer (WFST). Traversal may include active arc traversal, and may include active arc propagation. Arcs may be processed in parallel, including arcs originating from multiple source states and directed to a common destination state. Self-loops associated with states may be modeled within outgoing arcs of the states, which may reduce synchronization operations. Tasks may be ordered with respect to cache-data locality to associate tasks with processing threads based at least in part on whether another task associated with a corresponding data object was previously assigned to the thread.
Type: Grant
Filed: December 14, 2009
Date of Patent: July 9, 2013
Assignee: Intel Corporation
Inventors: Kisun You, Christopher J. Hughes, Yen-Kuang Chen
-
Patent number: 8483725
Abstract: A method for determining a location of a mobile device with reference to locations of a plurality of reference devices is disclosed. The mobile device receives ambient sound and provides ambient sound information to a server. Each reference device receives ambient sound and provides ambient sound information to the server. The ambient sound information includes a sound signature extracted from the ambient sound. The server determines a degree of similarity of the ambient sound information between the mobile device and each of the plurality of reference devices. The server determines the location of the mobile device to be a location of a reference device having the greatest degree of similarity.
Type: Grant
Filed: November 4, 2011
Date of Patent: July 9, 2013
Assignee: QUALCOMM Incorporated
Inventors: Taesu Kim, Kisun You, Yong-Hui Lee, Te-Won Lee
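The server-side step in the abstract above reduces to a nearest-neighbor search over sound signatures. A minimal sketch follows; representing a signature as a short feature vector and using cosine similarity are illustrative assumptions, not the claimed extraction method.

```python
# Toy location estimate: pick the reference device whose ambient-sound
# signature is most similar to the mobile device's signature.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def locate(mobile_sig, reference_sigs):
    # reference_sigs: {location_name: signature vector of the device there}
    return max(reference_sigs,
               key=lambda loc: cosine_similarity(mobile_sig, reference_sigs[loc]))

refs = {"cafe": [0.9, 0.1, 0.2], "street": [0.1, 0.8, 0.5]}
location = locate([0.85, 0.15, 0.25], refs)  # closest match: "cafe"
```

Sending compact signatures instead of raw audio keeps bandwidth low and avoids transmitting intelligible speech to the server.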
-
Publication number: 20130108115
Abstract: Embodiments of the invention describe methods and apparatus for performing context-sensitive OCR. A device obtains an image using a camera coupled to the device. The device identifies a portion of the image comprising a graphical object. The device infers a context associated with the image and selects a group of graphical objects based on that context. Improved OCR results are generated using the group of graphical objects. Input from various sensors, including microphone, GPS, and camera, along with user inputs such as voice, touch, and usage patterns, may be used in inferring the user context and selecting the dictionaries most relevant to the inferred contexts.
Type: Application
Filed: April 18, 2012
Publication date: May 2, 2013
Applicant: QUALCOMM Incorporated
Inventors: Kyuwoong Hwang, Te-Won Lee, Duck Hoon Kim, Kisun You, Minho Jin, Taesu Kim, Hyun-Mook Cho
-
Publication number: 20130110521
Abstract: A particular method includes transitioning out of a low-power state at a processor. The method also includes retrieving audio feature data from a buffer after transitioning out of the low-power state. The audio feature data indicates features of audio data received during the low-power state of the processor.
Type: Application
Filed: May 30, 2012
Publication date: May 2, 2013
Applicant: QUALCOMM Incorporated
Inventors: Kyu Woong Hwang, Kisun You, Minho Jin, Peter Jivan Shah, Kwokleung Chan, Taesu Kim
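The buffering scheme in the abstract above can be pictured as a fixed-size ring buffer that a low-power front end fills and the main processor drains after waking. The sketch below uses a bounded `deque` for that; the capacity and feature format are illustrative assumptions.

```python
# Toy feature buffer: frames pushed while the main processor sleeps are
# retained (up to capacity) and retrieved in one batch after wake-up.
from collections import deque

class FeatureBuffer:
    def __init__(self, capacity):
        # Oldest frames are dropped automatically once capacity is reached.
        self._frames = deque(maxlen=capacity)

    def push(self, frame):
        self._frames.append(frame)

    def drain(self):
        # Called by the main processor after it leaves the low-power state.
        frames = list(self._frames)
        self._frames.clear()
        return frames

buf = FeatureBuffer(capacity=3)
for frame in ["f1", "f2", "f3", "f4"]:  # "f1" is evicted when "f4" arrives
    buf.push(frame)
retained = buf.drain()  # ["f2", "f3", "f4"]
```

Storing derived features rather than raw audio is what makes this practical: the buffer stays small, and the main processor can resume classification without having missed the audio that arrived while it slept.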
-
Publication number: 20130058575
Abstract: A method includes receiving an indication of a set of image regions identified in image data. The method further includes selecting image regions from the set of image regions for text extraction, at least partially based on image region stability.
Type: Application
Filed: March 6, 2012
Publication date: March 7, 2013
Applicant: QUALCOMM Incorporated
Inventors: Hyung-Il Koo, Kisun You
-
Publication number: 20130027757
Abstract: A method of scanning an image of a document with a portable electronic device includes interactively indicating, in substantially real time on a user interface of the portable electronic device, an instruction for capturing at least one portion of an image to enhance quality. The indication is in response to identifying degradation associated with the portion(s) of the image. The method also includes capturing the portion(s) of the image with the portable electronic device according to the instruction. The method further includes stitching the captured portion(s) of the image in place of a degraded portion of a reference image corresponding to the document, to create a corrected stitched image of the document.
Type: Application
Filed: July 29, 2011
Publication date: January 31, 2013
Applicant: QUALCOMM Incorporated
Inventors: Te-Won Lee, Kyuwoong Hwang, Kisun You, Taesu Kim, Hyung-Il Koo
-
Publication number: 20130011055
Abstract: A method for processing a multi-channel image is disclosed. The method includes generating a plurality of grayscale images from the multi-channel image. At least one text region is identified in the plurality of grayscale images and text region information is determined from the at least one text region. The method generates text information of the multi-channel image based on the text region information. If the at least one text region includes a plurality of text regions, text region information from the plurality of text regions is merged to generate the text information. The plurality of grayscale images is processed in parallel. In identifying the at least one text region, at least one candidate text region may be identified in the plurality of grayscale images and the at least one text region may be identified in the identified candidate text region.
Type: Application
Filed: July 2, 2012
Publication date: January 10, 2013
Applicant: QUALCOMM Incorporated
Inventors: Kisun You, Hyung-Il Koo, Hyun-Mook Cho
-
Publication number: 20130004076
Abstract: A method for recognizing a text block in an object is disclosed. The text block includes a set of characters. A plurality of images of the object are captured and received. The object in the received images is then identified by extracting a pattern in one of the object images and comparing the extracted pattern with predetermined patterns. Further, a boundary of the object in each of the object images is detected and verified based on predetermined size information of the identified object. Text blocks in the object images are identified based on predetermined location information of the identified object. Interim sets of characters in the identified text blocks are generated based on format information of the identified object. Based on the interim sets of characters, a set of characters in the text block in the object is determined.
Type: Application
Filed: February 7, 2012
Publication date: January 3, 2013
Applicant: QUALCOMM Incorporated
Inventors: Hyung-Il Koo, Kisun You, Hyun-Mook Cho
-
Publication number: 20120226497
Abstract: A method for generating an anti-model of a sound class is disclosed. A plurality of candidate sound data is provided for generating the anti-model. A plurality of similarity values between the plurality of candidate sound data and a reference sound model of a sound class is determined. An anti-model of the sound class is generated based on at least one candidate sound data having a similarity value within a similarity threshold range.
Type: Application
Filed: February 13, 2012
Publication date: September 6, 2012
Applicant: QUALCOMM Incorporated
Inventors: Kisun You, Kyu Woong Hwang, Taesu Kim
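The key step in the abstract above is filtering candidates by a similarity band: sounds close enough to the reference model to be confusable, but not so close that they belong to the class itself. A minimal sketch follows; the precomputed similarity scores and the threshold band are illustrative assumptions.

```python
# Toy anti-model candidate selection: keep candidates whose similarity to the
# reference sound model falls inside a threshold range.
def select_anti_model_candidates(candidates, low=0.4, high=0.7):
    # candidates: {sound name: similarity to the reference sound model}
    return sorted(name for name, sim in candidates.items() if low <= sim <= high)

candidates = {
    "dog_bark": 0.95,   # too similar: likely belongs to the class itself
    "door_slam": 0.65,  # confusable: good anti-model material
    "siren": 0.45,      # confusable: good anti-model material
    "silence": 0.05,    # too dissimilar: adds nothing to the anti-model
}
selected = select_anti_model_candidates(candidates)  # ["door_slam", "siren"]
```

The anti-model trained on such borderline sounds gives the classifier an explicit "near miss" class, which reduces false accepts on confusable inputs.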
-
Publication number: 20120224706
Abstract: A method for recognizing an environmental sound in a client device in cooperation with a server is disclosed. The client device includes a client database having a plurality of sound models of environmental sounds and a plurality of labels, each of which identifies at least one sound model. The client device receives an input environmental sound and generates an input sound model based on the input environmental sound. At the client device, a similarity value is determined between the input sound model and each of the sound models to identify one or more sound models from the client database that are similar to the input sound model. A label is selected from labels associated with the identified sound models, and the selected label is associated with the input environmental sound based on a confidence level of the selected label.
Type: Application
Filed: October 31, 2011
Publication date: September 6, 2012
Applicant: QUALCOMM Incorporated
Inventors: Kyu Woong Hwang, Taesu Kim, Kisun You
-
Publication number: 20120142378
Abstract: A method for determining a location of a mobile device with reference to locations of a plurality of reference devices is disclosed. The mobile device receives ambient sound and provides ambient sound information to a server. Each reference device receives ambient sound and provides ambient sound information to the server. The ambient sound information includes a sound signature extracted from the ambient sound. The server determines a degree of similarity of the ambient sound information between the mobile device and each of the plurality of reference devices. The server determines the location of the mobile device to be a location of a reference device having the greatest degree of similarity.
Type: Application
Filed: November 4, 2011
Publication date: June 7, 2012
Applicant: QUALCOMM Incorporated
Inventors: Taesu Kim, Kisun You, Yong-Hui Lee, Te-Won Lee
-
Publication number: 20120142324
Abstract: A method for providing information for a conference at one or more locations is disclosed. One or more mobile devices monitor one or more starting requirements of the conference and transmit input sound information to a server when the one or more starting requirements of the conference is detected. The one or more starting requirements may include a starting time of the conference, a location of the conference, and/or acoustic characteristics of a conference environment. The server generates conference information based on the input sound information from each mobile device and transmits the conference information to each mobile device. The conference information may include information on attendees, a current speaker among the attendees, an arrangement of the attendees, and/or a meeting log of attendee participation at the conference.
Type: Application
Filed: November 4, 2011
Publication date: June 7, 2012
Applicant: QUALCOMM Incorporated
Inventors: Taesu Kim, Kisun You, Kyu Woong Hwang, Te-Won Lee