Patents by Inventor Lae Hoon Kim

Lae Hoon Kim has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9578439
    Abstract: Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: February 21, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Raghuveer Peri, Erik Visser
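
The focusing/defocusing on particular directions described in this abstract can be illustrated with a toy directional gain window. This sketch is not from the patent: the raised-cosine taper, the 60-degree window width, and all names here are assumptions for illustration only.

```python
import math

def focus_gain(source_azimuth_deg, focus_azimuth_deg, width_deg=60.0):
    """Illustrative focus window: full gain at the focus direction,
    tapering to zero outside +/- width_deg (hypothetical scheme)."""
    # Wrap the angular difference into [-180, 180) before tapering.
    diff = abs((source_azimuth_deg - focus_azimuth_deg + 180.0) % 360.0 - 180.0)
    if diff >= width_deg:
        return 0.0
    # Raised-cosine taper from 1.0 at the center to 0.0 at the window edge.
    return 0.5 * (1.0 + math.cos(math.pi * diff / width_deg))

# Directionally-encoded sources with azimuths; emphasize those near 0 degrees.
sources = {"speech": 10.0, "traffic": 95.0, "music": -20.0}
gains = {name: focus_gain(az, 0.0) for name, az in sources.items()}
```

A playback renderer could then scale each decoded source by its gain before panning to the listener's actual loudspeaker layout.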
  • Patent number: 9552840
    Abstract: A method for audio signal processing is described. The method includes decomposing a recorded auditory scene into a first category of localizable sources and a second category of ambient sound. The method also includes recording an indication of the directions of each of the localizable sources. The method may be performed with a device having a microphone array.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: January 24, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Pei Xiang, Ian Ernan Liu, Dinesh Ramakrishnan
  • Patent number: 9536537
    Abstract: A method for speech restoration by an electronic device is described. The method includes obtaining a noisy speech signal. The method also includes suppressing noise in the noisy speech signal to produce a noise-suppressed speech signal. The noise-suppressed speech signal has a bandwidth that includes at least three subbands. The method further includes iteratively restoring each of the at least three subbands. Each of the at least three subbands is restored based on all previously restored subbands of the at least three subbands.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: January 3, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Yinyi Guo, Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Sanghyun Chi
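
The iterative restoration order the abstract describes, where each subband is restored using all previously restored subbands, can be sketched as a control-flow skeleton. The toy `toy_restore` rule below is purely hypothetical; the patent does not specify it.

```python
def restore_subbands(suppressed, restore_fn):
    """Restore subbands in order; each restoration step sees every
    previously restored subband (illustrative control flow only)."""
    restored = []
    for band in suppressed:
        restored.append(restore_fn(band, restored))
    return restored

def toy_restore(band, earlier):
    """Hypothetical rule: blend a band with the mean of earlier restored bands."""
    if not earlier:
        return band
    avg = sum(earlier) / len(earlier)
    return (band + avg) / 2.0

result = restore_subbands([1.0, 2.0, 3.0], toy_restore)  # -> [1.0, 1.5, 2.125]
```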
  • Patent number: 9532140
    Abstract: One example device includes a camera; a display device; a memory; and a processor in communication with the memory to receive audio signals from two or more microphones or a far-end device; receive first location information and second location information, the first location information for a visual identification of an audio source of the received audio signals and the second location information identifying a direction of arrival from the audio source; receive a first adjustment to a first portion of a UI to change either a visual identification or a coordinate direction of a direction focus; in response to the first adjustment, automatically perform a second adjustment to a second portion of the UI to change the other of the visual identification or the coordinate direction of the direction focus; and process the audio signals to filter sounds outside the direction focus, or emphasize sounds within the direction focus.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: December 27, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Phuong Lam Ton, Erik Visser, Jeremy P. Toman, Francis Bernard MacDougall
  • Patent number: 9495591
    Abstract: Methods, systems and articles of manufacture for recognizing and locating one or more objects in a scene are disclosed. An image and/or video of the scene are captured. Using audio recorded at the scene, an object search of the captured scene is narrowed down. For example, the direction of arrival (DOA) of a sound can be determined and used to limit the search area in a captured image/video. In another example, keypoint signatures may be selected based on types of sounds identified in the recorded audio. A keypoint signature corresponds to a particular object that the system is configured to recognize. Objects in the scene may then be recognized using a shift invariant feature transform (SIFT) analysis comparing keypoints identified in the captured scene to the selected keypoint signatures.
    Type: Grant
    Filed: October 30, 2012
    Date of Patent: November 15, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Haiyin Wang, Hasib A. Siddiqui, Lae-Hoon Kim
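
The abstract's idea of using a sound's direction of arrival to limit the image search area can be sketched as a mapping from azimuth to a horizontal pixel strip, within which keypoint matching would then run. The linear azimuth-to-column mapping, field of view, and margin below are all assumptions, not taken from the patent.

```python
def doa_to_column_range(azimuth_deg, image_width, fov_deg=90.0, margin_deg=15.0):
    """Map a sound's direction of arrival (DOA) to a horizontal pixel
    window, narrowing the keypoint search to that strip of the image.
    Linear mapping across an assumed camera field of view."""
    def col(az):
        # Clamp to the camera's field of view, then map linearly to columns.
        az = max(-fov_deg / 2, min(fov_deg / 2, az))
        return int((az + fov_deg / 2) * (image_width - 1) / fov_deg)
    return col(azimuth_deg - margin_deg), col(azimuth_deg + margin_deg)

lo, hi = doa_to_column_range(0.0, 640)  # sound from straight ahead
```

A recognizer would then compare keypoints found only in columns `lo..hi` against the keypoint signatures selected from the identified sound type.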
  • Patent number: 9497544
    Abstract: A method for echo reduction by an electronic device is described. The method includes nulling at least one speaker. The method also includes mixing a set of runtime audio signals based on a set of acoustic paths to determine a reference signal. The method also includes receiving at least one composite audio signal that is based on the set of runtime audio signals. The method further includes reducing echo in the at least one composite audio signal based on the reference signal.
    Type: Grant
    Filed: July 1, 2013
    Date of Patent: November 15, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Asif I. Mohammad, Lae-Hoon Kim, Erik Visser
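
The reference-signal construction in this abstract, mixing runtime audio signals through a set of acoustic paths, can be illustrated with scalar path gains standing in for real room impulse responses. All values and names are illustrative assumptions; a real system would use adaptive filtering rather than ideal subtraction.

```python
def mix_reference(runtime_signals, acoustic_paths):
    """Form an echo reference by mixing runtime playback signals through
    their acoustic paths (here simplified to scalar gains)."""
    n = len(runtime_signals[0])
    ref = [0.0] * n
    for sig, gain in zip(runtime_signals, acoustic_paths):
        for i, s in enumerate(sig):
            ref[i] += gain * s
    return ref

def reduce_echo(composite, reference):
    """Subtract the reference from the composite signal (idealized)."""
    return [c - r for c, r in zip(composite, reference)]

# Demo: two playback channels leak into the mic via scalar path gains.
reference = mix_reference([[1.0, 2.0], [3.0, 4.0]], [0.5, 0.25])  # [1.25, 2.0]
```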
  • Publication number: 20160309279
    Abstract: A wireless device is described. The wireless device includes at least two microphones on the wireless device. The microphones are configured to capture sound from a target user. The wireless device also includes processing circuitry. The processing circuitry is coupled to the microphones. The processing circuitry is configured to locate the target user. The wireless device further includes a communication interface. The communication interface is coupled to the processing circuitry. The communication interface is configured to receive external device microphone audio from at least one external device microphone to assist the processing circuitry in the wireless device to locate the target user.
    Type: Application
    Filed: June 28, 2016
    Publication date: October 20, 2016
    Inventors: Lae-Hoon Kim, Pei Xiang, Erik Visser
  • Publication number: 20160309275
    Abstract: A multi-channel sound (MCS) system features intelligent calibration (e.g., of acoustic echo cancellation (AEC)) for use in dynamic acoustic environments. A sensor subsystem is utilized to detect and identify changes in the acoustic environment and determine a “scene” corresponding to the resulting acoustic characteristics for that environment. This detected scene is compared to predetermined scenes corresponding to the acoustic environment. Each predetermined scene has a corresponding pre-tuned filter configuration for optimal AEC performance. Based on the results of the comparison, the pre-tuned filter configuration corresponding to the predetermined scene that most closely matches the detected scene is utilized by the AEC subsystem of the multi-channel sound system.
    Type: Application
    Filed: April 17, 2015
    Publication date: October 20, 2016
    Inventors: Andre Gustavo Schevciw, Babak Forutanpour, Asif Iqbal Mohammad, Lae-Hoon Kim
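
The scene-matching step above, selecting the pre-tuned filter configuration whose predetermined scene most closely matches the detected one, reduces to a nearest-neighbor lookup. The feature vectors, scene names, and Euclidean distance below are illustrative assumptions, not details from the application.

```python
def select_filter_config(detected_scene, predetermined):
    """Pick the pre-tuned filter configuration whose predetermined scene
    most closely matches the detected scene (Euclidean distance over
    hypothetical acoustic feature vectors)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(predetermined, key=lambda item: dist(detected_scene, item["features"]))
    return best["filter_config"]

scenes = [
    {"features": [0.9, 0.1], "filter_config": "windows_down"},
    {"features": [0.1, 0.8], "filter_config": "cabin_closed"},
]
cfg = select_filter_config([0.2, 0.7], scenes)  # closest to "cabin_closed"
```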
  • Publication number: 20160284346
    Abstract: Disclosed is a feature extraction and classification methodology wherein audio data is gathered in a target environment under varying conditions. From this collected data, corresponding features are extracted, tagged with appropriate labels (e.g., audio event descriptions), and used for training deep neural networks (DNNs) to extract underlying target audio events from unlabeled training data. Once trained, these DNNs are used to predict underlying events in noisy audio to extract therefrom features that enable the separation of the underlying audio events from the noisy components thereof.
    Type: Application
    Filed: March 27, 2015
    Publication date: September 29, 2016
    Inventors: Erik Visser, Yinyi Guo, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
  • Publication number: 20160254007
    Abstract: A method for speech restoration by an electronic device is described. The method includes obtaining a noisy speech signal. The method also includes suppressing noise in the noisy speech signal to produce a noise-suppressed speech signal. The noise-suppressed speech signal has a bandwidth that includes at least three subbands. The method further includes iteratively restoring each of the at least three subbands. Each of the at least three subbands is restored based on all previously restored subbands of the at least three subbands.
    Type: Application
    Filed: February 27, 2015
    Publication date: September 1, 2016
    Inventors: Yinyi Guo, Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Sanghyun Chi
  • Patent number: 9430628
    Abstract: A method of selectively authorizing access includes obtaining, at an authentication device, first information corresponding to first synthetic biometric data. The method also includes obtaining, at the authentication device, first common synthetic data and second biometric data. The method further includes generating, at the authentication device, second common synthetic data based on the first information and the second biometric data. The method also includes selectively authorizing, by the authentication device, access based on a comparison of the first common synthetic data and the second common synthetic data.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: August 30, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Juhan Nam, Erik Visser
  • Patent number: 9408011
    Abstract: A wireless device is provided that makes use of other nearby audio transducer devices to generate a surround sound effect for a targeted user. To do this, the wireless device first ascertains whether there are any nearby external microphones and/or loudspeaker devices. An internal microphone for the wireless device and any other nearby external microphones may be used to ascertain a location of the desired/targeted user as well as the nearby loudspeaker devices. This information is then used to generate a surround sound effect for the desired/targeted user by having the wireless device steer audio signals to its internal loudspeakers and/or the nearby external loudspeaker devices.
    Type: Grant
    Filed: May 21, 2012
    Date of Patent: August 2, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Pei Xiang, Erik Visser
  • Publication number: 20160198282
    Abstract: Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.
    Type: Application
    Filed: July 23, 2015
    Publication date: July 7, 2016
    Inventors: Lae-Hoon Kim, Raghuveer Peri, Erik Visser
  • Publication number: 20160171964
    Abstract: A crosstalk cancellation technique reduces feedback in a shared acoustic space by canceling out some or all parts of sound signals that would otherwise be produced by a loudspeaker only to be captured by a microphone that, recursively, would cause these sound signals to be reproduced again on the loudspeaker as feedback. Crosstalk cancellation can be used in a multichannel acoustic system (MAS) comprising an arrangement of microphones, loudspeakers, and a processor that together enhance conversational speech between persons in a shared acoustic space. To achieve crosstalk cancellation, a processor analyzes the input of each microphone, compares it to the output of the far loudspeaker(s) relative to that microphone, cancels out any portion of a sound signal received by the microphone that matches signals just produced by the far loudspeaker(s), and sends only the remaining sound signal (if any) to those far loudspeakers.
    Type: Application
    Filed: July 24, 2015
    Publication date: June 16, 2016
    Inventors: Lae-Hoon Kim, Asif Iqbal Mohammad, Erik Visser
  • Publication number: 20160171989
    Abstract: A multichannel acoustic system (MAS) comprises an arrangement of microphones and loudspeakers and a multichannel acoustic processor (MAP) that together enhance conversational speech between two or more persons in a shared acoustic space such as an automobile. The enhancements are achieved by receiving sound signals substantially originating from relatively near sound sources; filtering the sound signals to cancel at least one echo signal detected for at least one microphone from among the plurality of microphones; filtering the sound signals received by the plurality of microphones to cancel at least one feedback signal detected for at least one microphone from among the plurality of microphones; and reproducing the filtered sound signals for each microphone on a corresponding subset of loudspeakers that are relatively far from the source microphone.
    Type: Application
    Filed: July 24, 2015
    Publication date: June 16, 2016
    Inventors: Samir K. Gupta, Asif Iqbal Mohammad, Erik Visser, Lae-Hoon Kim, Shaun William Van Dyken
  • Publication number: 20160171806
    Abstract: A system includes a memory configured to store data associated with a service that is available. The system also includes a microphone associated with an acoustic space and configured to receive an audio input produced by a person. The system further includes a sensor located within the acoustic space and configured to detect vibrations produced by the person. The system includes a processor coupled to the memory, to the microphone, and to the sensor. The processor is configured to conditionally authorize execution of the service requested by the person, the service conditionally authorized based on the audio input and the vibrations.
    Type: Application
    Filed: December 11, 2015
    Publication date: June 16, 2016
    Inventors: Shaun William Van Dyken, Erik Visser, Asif Iqbal Mohammad, Samir Kumar Gupta, Lae-Hoon Kim, Sreekanth Narayanaswamy, Phuong Ton
  • Publication number: 20160174010
    Abstract: A multichannel acoustic system (MAS) comprises an arrangement of microphones, loudspeakers, and filters along with a multichannel acoustic processor (MAP) and other components to together provide and enhance the auditory experience of persons in a shared acoustic space such as, for example, the driver and other passengers in an automobile. Driver-specific features such as navigation and auditory feedback cues are described, as are individual auditory customizations and collective communications, both within the shared acoustic space and with other individuals not located in the space via enhanced conference call facilities.
    Type: Application
    Filed: July 24, 2015
    Publication date: June 16, 2016
    Inventors: Asif Iqbal Mohammad, Erik Visser, Lae-Hoon Kim, Shaun William Van Dyken, Troy Schultz, Samir K. Gupta
  • Patent number: 9360546
    Abstract: Systems, methods, and apparatus for projecting an estimated direction of arrival of sound onto a plane that does not include the estimated direction are described.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: June 7, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser
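
The projection this abstract describes has a standard closed form: for a direction vector d and a plane with unit normal n, the projection is p = d − (d · n)n. The sketch below illustrates that formula; it is not code from the patent, and the example vectors are assumptions.

```python
def project_onto_plane(direction, normal):
    """Project an estimated direction-of-arrival vector onto a plane:
    p = d - (d . n) n, where n is the plane's unit normal."""
    dot = sum(d * n for d, n in zip(direction, normal))
    return [d - dot * n for d, n in zip(direction, normal)]

# A source direction with elevation, projected onto the horizontal (x-y) plane.
p = project_onto_plane([0.6, 0.0, 0.8], [0.0, 0.0, 1.0])  # -> [0.6, 0.0, 0.0]
```

The projected vector can be renormalized if a unit direction within the plane is needed.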
  • Patent number: 9361898
    Abstract: A method for encoding three dimensional audio by a wireless communication device is disclosed. The wireless communication device detects an indication of a plurality of localizable audio sources. The wireless communication device also records a plurality of audio signals associated with the plurality of localizable audio sources. The wireless communication device also encodes the plurality of audio signals.
    Type: Grant
    Filed: September 10, 2015
    Date of Patent: June 7, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Lae-Hoon Kim, Pei Xiang
  • Publication number: 20160157013
    Abstract: Systems, devices, and methods are described for recognizing and focusing on at least one source of an audio communication as part of a communication including a video image and an audio communication derived from two or more microphones when a relative position between the microphones is known. In certain embodiments, linked audio and video focus areas providing location information for one or more sound sources may each be associated with different user inputs, and an input to adjust a focus in either the audio or video domain may automatically adjust the focus in the another domain.
    Type: Application
    Filed: February 3, 2016
    Publication date: June 2, 2016
    Inventors: Lae-Hoon Kim, Phuong Lam Ton, Erik Visser, Jeremy P. Toman, Francis Bernard MacDougall
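
The linked audio and video focus areas described above imply a two-way mapping: adjusting the focus in one domain determines it in the other. The sketch below uses a linear column-to-azimuth mapping and an assumed camera field of view; both are illustrative assumptions, not details from the application.

```python
def make_linked_focus(image_width, fov_deg=90.0):
    """Two-way link between a visual focus (screen column) and an audio
    focus (direction of arrival): a user adjustment in either domain
    automatically yields the focus in the other domain."""
    def column_to_azimuth(col):
        # Leftmost column maps to -fov/2, rightmost to +fov/2.
        return (col / (image_width - 1)) * fov_deg - fov_deg / 2
    def azimuth_to_column(az):
        return round((az + fov_deg / 2) / fov_deg * (image_width - 1))
    return column_to_azimuth, azimuth_to_column

col_to_az, az_to_col = make_linked_focus(641)
# Dragging the visual focus to column 480 implies a +22.5 degree audio focus.
```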