Patents by Inventor Lae Hoon Kim

Lae Hoon Kim has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9858403
    Abstract: A device includes a memory and a processor. The memory is configured to store a threshold. The processor is configured to authenticate a user based on authentication data. The processor is also configured to, in response to determining that the user is authenticated, generate a correlation score indicating a correlation between a first signal received from a first sensor and a second signal received from a second sensor. The processor is also configured to determine liveness of the user based on a comparison of the correlation score and the threshold.
    Type: Grant
    Filed: February 2, 2016
    Date of Patent: January 2, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Yinyi Guo, Minho Jin, JunCheol Cho, Yongwoo Cho, Lae-Hoon Kim, Erik Visser, Shuhua Zhang
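The liveness check described in this abstract can be illustrated with a short sketch. This is not the patented implementation: the choice of Pearson correlation, the sensor pairing (microphone plus vibration sensor), and the threshold value are all assumptions made for illustration.

```python
# Illustrative sketch only: liveness detection by correlating two sensor
# signals and comparing the score against a stored threshold. The sensor
# names and threshold value below are hypothetical, not from the patent.

def correlation_score(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def is_live(mic_signal, vibration_signal, threshold=0.5):
    """Declare liveness when the two sensors' signals move together."""
    return correlation_score(mic_signal, vibration_signal) >= threshold
```

A genuine user speaking produces correlated energy on both sensors, while a loudspeaker replay typically does not, which is why the score is gated on a threshold.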
  • Patent number: 9857451
    Abstract: A method for mapping a source location by an electronic device is described. The method includes obtaining sensor data. The method also includes mapping a source location to electronic device coordinates based on the sensor data. The method further includes mapping the source location from electronic device coordinates to physical coordinates. The method additionally includes performing an operation based on a mapping.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: January 2, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Phuong Lam Ton, Jeremy Patrick Toman, Jeffrey Clinton Shaw
  • Publication number: 20170353809
    Abstract: A method of operation of a device includes receiving an input signal at the device. The input signal is generated using at least one microphone. The input signal includes a first signal component having a first amount of wind turbulence noise and a second signal component having a second amount of wind turbulence noise that is greater than the first amount of wind turbulence noise. The method further includes generating, based on the input signal, an output signal at the device. The output signal includes the first signal component and a third signal component that replaces the second signal component. A first frequency response of the input signal corresponds to a second frequency response of the output signal.
    Type: Application
    Filed: June 1, 2016
    Publication date: December 7, 2017
    Inventors: Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Raghuveer Peri, Yinyi Guo
  • Patent number: 9838815
    Abstract: A method of operation of a device includes receiving an input signal at the device. The input signal is generated using at least one microphone. The input signal includes a first signal component having a first amount of wind turbulence noise and a second signal component having a second amount of wind turbulence noise that is greater than the first amount of wind turbulence noise. The method further includes generating, based on the input signal, an output signal at the device. The output signal includes the first signal component and a third signal component that replaces the second signal component. A first frequency response of the input signal corresponds to a second frequency response of the output signal.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: December 5, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Raghuveer Peri, Yinyi Guo
  • Publication number: 20170339491
    Abstract: A headset device includes a first earpiece configured to receive a reference sound and to generate a first reference audio signal based on the reference sound. The headset device further includes a second earpiece configured to receive the reference sound and to generate a second reference audio signal based on the reference sound. The headset device further includes a controller coupled to the first earpiece and to the second earpiece. The controller is configured to generate a first signal and a second signal based on a phase relationship between the first reference audio signal and the second reference audio signal. The controller is further configured to output the first signal to the first earpiece and output the second signal to the second earpiece.
    Type: Application
    Filed: May 18, 2016
    Publication date: November 23, 2017
    Inventors: Lae-Hoon Kim, Hyun Jin Park, Erik Visser, Raghuveer Peri
  • Publication number: 20170308164
    Abstract: Disclosed is an application interface that takes into account the user's gaze direction relative to who is speaking in an interactive multi-participant environment where audio-based contextual information and/or visual-based semantic information is being presented. In various implementations, two different types of microphone array devices (MADs) may be used. The first type of MAD is a steerable microphone array (a.k.a. a steerable array), which is worn by a user in a known orientation with regard to the user's eyes; multiple users may each wear a steerable array. The second type of MAD is a fixed-location microphone array (a.k.a. a fixed array), which is placed in the same acoustic space as the users (one or more of whom may be using steerable arrays).
    Type: Application
    Filed: July 10, 2017
    Publication date: October 26, 2017
    Inventors: Lae-Hoon KIM, Jongwon Shin, Erik Visser
  • Publication number: 20170278519
    Abstract: An apparatus for detecting a sound in an acoustical environment includes a microphone array configured to detect an audio signal in the acoustical environment. The apparatus also includes a processor configured to determine an angular location of a sound source of the audio signal. The angular location is relative to the microphone array. The processor is also configured to determine at least one reverberation characteristic of the audio signal. The processor is further configured to determine a distance, relative to the microphone array, of the sound source along an axis associated with the angular location based on the at least one reverberation characteristic.
    Type: Application
    Filed: March 25, 2016
    Publication date: September 28, 2017
    Inventors: Erik Visser, Wenliang Lu, Lae-Hoon Kim, Yinyi Guo, Shuhua Zhang
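The idea of recovering distance from a reverberation characteristic can be sketched under a simplified diffuse-field model, which is an assumption and not the claimed method: for a point source, the direct-to-reverberant energy ratio (DRR) falls off roughly as the square of critical distance over source distance, so a calibrated critical distance lets a measured DRR be inverted into a distance along the known angular direction.

```python
import math

# Simplified point-source model (an illustrative assumption, not the
# patented method): DRR ~ (critical_distance / r)**2, hence
# r = critical_distance / sqrt(DRR).

def distance_from_drr(drr, critical_distance=1.5):
    """Estimate source distance in meters from a linear-scale DRR."""
    if drr <= 0:
        raise ValueError("DRR must be positive")
    return critical_distance / math.sqrt(drr)
```

The critical distance (1.5 m here) is a room-dependent calibration constant chosen only for illustration.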
  • Publication number: 20170270406
    Abstract: A method of training a device specific cloud-based audio processor includes receiving sensor data captured from multiple sensors at a local device. The method also includes receiving spatial information labels computed on the local device using local configuration information. The spatial information labels are associated with the captured sensor data. Lower layers of a first neural network are trained based on the spatial information labels and sensor data. The trained lower layers are incorporated into a second, larger neural network for audio classification. The second, larger neural network may be retrained using the trained lower layers of the first neural network.
    Type: Application
    Filed: September 22, 2016
    Publication date: September 21, 2017
    Inventors: Erik VISSER, Minho JIN, Lae-Hoon KIM, Raghuveer PERI, Shuhua ZHANG
  • Patent number: 9769587
    Abstract: A multi-channel sound (MCS) system features intelligent calibration (e.g., of acoustic echo cancellation (AEC)) for use in dynamic acoustic environments. A sensor subsystem is utilized to detect and identify changes in the acoustic environment and determine a “scene” corresponding to the resulting acoustic characteristics for that environment. This detected scene is compared to predetermined scenes corresponding to the acoustic environment. Each predetermined scene has a corresponding pre-tuned filter configuration for optimal AEC performance. Based on the results of the comparison, the pre-tuned filter configuration corresponding to the predetermined scene that most closely matches the detected scene is utilized by the AEC subsystem of the multi-channel sound system.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: September 19, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Andre Gustavo Schevciw, Babak Forutanpour, Asif Iqbal Mohammad, Lae-Hoon Kim
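The scene-matching step in this abstract can be sketched as a nearest-neighbor lookup. The scene names, feature vectors, and filter-configuration labels below are invented for illustration; the patent does not specify them.

```python
# Hypothetical sketch: each predetermined scene stores an acoustic feature
# vector and a pre-tuned AEC filter configuration. The detected scene is
# matched to the closest predetermined scene by squared Euclidean distance,
# and that scene's filter configuration is selected.

SCENES = {
    "windows_closed": {"features": (0.2, 0.9), "filter_config": "aec_short_tail"},
    "windows_open":   {"features": (0.8, 0.3), "filter_config": "aec_long_tail"},
}

def select_filter_config(detected_features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(SCENES.values(),
               key=lambda s: dist(s["features"], detected_features))
    return best["filter_config"]
```

The AEC subsystem would then load the returned pre-tuned configuration instead of re-converging its filters from scratch.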
  • Patent number: 9746916
    Abstract: Disclosed is an application interface that takes into account the user's gaze direction relative to who is speaking in an interactive multi-participant environment where audio-based contextual information and/or visual-based semantic information is being presented. In various implementations, two different types of microphone array devices (MADs) may be used. The first type of MAD is a steerable microphone array (a.k.a. a steerable array), which is worn by a user in a known orientation with regard to the user's eyes; multiple users may each wear a steerable array. The second type of MAD is a fixed-location microphone array (a.k.a. a fixed array), which is placed in the same acoustic space as the users (one or more of whom may be using steerable arrays).
    Type: Grant
    Filed: November 12, 2012
    Date of Patent: August 29, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Jongwon Shin, Erik Visser
  • Patent number: 9743213
    Abstract: A multichannel acoustic system (MAS) comprises an arrangement of microphones, loudspeakers, and filters along with a multichannel acoustic processor (MAP) and other components that together provide and enhance the auditory experience of persons in a shared acoustic space such as, for example, the driver and other passengers in an automobile. Driver-specific features such as navigation and auditory feedback cues are described, as are individual auditory customizations and collective communications, both within the shared acoustic space and, via enhanced conference call facilities, with other individuals not located in the space.
    Type: Grant
    Filed: July 24, 2015
    Date of Patent: August 22, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Asif Iqbal Mohammad, Erik Visser, Lae-Hoon Kim, Shaun William Van Dyken, Troy Schultz, Samir Kumar Gupta
  • Patent number: 9736604
    Abstract: A system which tracks a social interaction between a plurality of participants, includes a fixed beamformer that is adapted to output a first spatially filtered output and configured to receive a plurality of second spatially filtered outputs from a plurality of steerable beamformers. Each steerable beamformer outputs a respective one of the second spatially filtered outputs associated with a different one of the participants. The system also includes a processor capable of determining a similarity between the first spatially filtered output and each of the second spatially filtered outputs. The processor determines the social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs.
    Type: Grant
    Filed: November 12, 2012
    Date of Patent: August 15, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Jongwon Shin, Erik Visser
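The similarity comparison at the heart of this system can be sketched with cosine similarity, which is one plausible choice; the patent does not commit to a specific similarity measure, and the participant names below are hypothetical.

```python
import math

# Simplified sketch, not the patented system: compare the fixed
# beamformer's output against each participant's steerable-beamformer
# output. The participant whose output is most similar to the fixed
# output is taken as the one the fixed array is currently capturing.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def active_talker(fixed_output, steerable_outputs):
    """Return the participant id whose steerable output best matches."""
    return max(steerable_outputs,
               key=lambda pid: cosine_similarity(fixed_output,
                                                 steerable_outputs[pid]))
```

Tracking which participant "wins" over time yields the social-interaction pattern the abstract describes.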
  • Publication number: 20170220036
    Abstract: A drone system and method. Audio signals are received via one or more microphones positioned relative to a location on a drone and one or more of the audio signals are identified as of interest. Flight characteristics of the drone are then controlled based on the audio signals that are of interest.
    Type: Application
    Filed: June 17, 2016
    Publication date: August 3, 2017
    Inventors: Erik Visser, Lae-Hoon Kim, Ricardo De Jesus Bernal Castillo, Shuhua Zhang, Raghuveer Peri
  • Publication number: 20170220786
    Abstract: A device includes a memory and a processor. The memory is configured to store a threshold. The processor is configured to authenticate a user based on authentication data. The processor is also configured to, in response to determining that the user is authenticated, generate a correlation score indicating a correlation between a first signal received from a first sensor and a second signal received from a second sensor. The processor is also configured to determine liveness of the user based on a comparison of the correlation score and the threshold.
    Type: Application
    Filed: February 2, 2016
    Publication date: August 3, 2017
    Inventors: Yinyi Guo, Minho Jin, JunCheol Cho, Yongwoo Cho, Lae-Hoon Kim, Erik Visser, Shuhua Zhang
  • Patent number: 9706300
    Abstract: A method of generating audio output includes displaying a graphical user interface (GUI) at a user device. The GUI represents an area having multiple regions and multiple audio capture devices are located in the area. The method also includes receiving audio data from the multiple audio capture devices. The method further includes receiving an input indicating a selected region of the multiple regions. The method also includes generating, at the user device, audio output based on audio data from a subset of the multiple audio capture devices. Each audio capture device in the subset is located in the selected region.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: July 11, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
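The region-selection step can be sketched as filtering the capture devices by region and mixing the survivors. The simple averaging mix and the data layout are assumptions for illustration; the patent describes only that the output is generated from the subset in the selected region.

```python
# Minimal sketch of region-based output generation (data layout and the
# averaging mix are assumptions, not from the patent): keep only the
# capture devices located in the selected region, then mix their audio.

def mix_selected_region(devices, selected_region):
    """devices: list of dicts with 'region' and equal-length 'samples'."""
    subset = [d for d in devices if d["region"] == selected_region]
    if not subset:
        return []
    n = len(subset[0]["samples"])
    return [sum(d["samples"][i] for d in subset) / len(subset)
            for i in range(n)]
```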
  • Patent number: 9674184
    Abstract: A method of selectively authorizing access includes obtaining, at an authentication device, first information corresponding to first synthetic biometric data. The method also includes obtaining, at the authentication device, first common synthetic data and second biometric data. The method further includes generating, at the authentication device, second common synthetic data based on the first information and the second biometric data. The method also includes selectively authorizing, by the authentication device, access based on a comparison of the first common synthetic data and the second common synthetic data.
    Type: Grant
    Filed: December 26, 2014
    Date of Patent: June 6, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Juhan Nam, Erik Visser
  • Patent number: 9672805
    Abstract: A crosstalk cancelation technique reduces feedback in a shared acoustic space by canceling out some or all parts of sound signals that would otherwise be produced by a loudspeaker only to be captured by a microphone that, recursively, would cause these sound signals to be reproduced again on the loudspeaker as feedback. Crosstalk cancelation can be used in a multichannel acoustic system (MAS) comprising an arrangement of microphones, loudspeakers, and a processor to together enhance conversational speech among persons in a shared acoustic space. To achieve crosstalk cancelation, a processor analyzes the input of each microphone, compares it to the output of the far loudspeaker(s) relative to that microphone, cancels out any portion of the received sound signal that matches signals just produced by the far loudspeaker(s), and sends only the remaining sound signal (if any) to such far loudspeakers.
    Type: Grant
    Filed: July 24, 2015
    Date of Patent: June 6, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Asif Iqbal Mohammad, Erik Visser
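The core cancellation step can be sketched with a standard normalized least-mean-squares (NLMS) echo canceller. This is a textbook technique used here for illustration, not necessarily the patented algorithm: it adaptively estimates the path from the far loudspeaker to the microphone, subtracts the estimated echo, and forwards only the residual.

```python
# Hedged sketch of crosstalk cancellation via a standard NLMS adaptive
# filter (an illustrative stand-in, not necessarily the patented method).

def nlms_cancel(loudspeaker, microphone, taps=4, mu=0.5, eps=1e-8):
    """Return the residual (echo-cancelled) microphone signal."""
    w = [0.0] * taps
    residual = []
    for n in range(len(microphone)):
        # Most recent loudspeaker samples, newest first, zero-padded.
        x = [loudspeaker[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        echo_est = sum(wi * xi for wi, xi in zip(w, x))
        e = microphone[n] - echo_est          # residual after cancellation
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        residual.append(e)
    return residual
```

With a microphone signal that is purely a delayed, attenuated copy of the loudspeaker signal, the residual shrinks toward zero as the filter converges, which is exactly the "cancel what the far loudspeaker just produced" behavior the abstract describes.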
  • Patent number: 9666183
    Abstract: Disclosed is a feature extraction and classification methodology wherein audio data is gathered in a target environment under varying conditions. From this collected data, corresponding features are extracted, labeled with appropriate filters (e.g., audio event descriptions), and used for training deep neural networks (DNNs) to extract underlying target audio events from unlabeled training data. Once trained, these DNNs are used to predict underlying events in noisy audio to extract therefrom features that enable the separation of the underlying audio events from the noisy components thereof.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: May 30, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Yinyi Guo, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
  • Publication number: 20170084286
    Abstract: A method of performing noise reduction includes capturing a first audio signal at a first microphone of a first device. The method also includes receiving, at the first device, audio data representative of a second audio signal from a second device. The second audio signal is captured by a second microphone of the second device. The method further includes performing noise reduction on the first audio signal based at least in part on the audio data representative of the second audio signal.
    Type: Application
    Filed: September 18, 2015
    Publication date: March 23, 2017
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
  • Publication number: 20170085985
    Abstract: A method of generating audio output includes displaying a graphical user interface (GUI) at a user device. The GUI represents an area having multiple regions and multiple audio capture devices are located in the area. The method also includes receiving audio data from the multiple audio capture devices. The method further includes receiving an input indicating a selected region of the multiple regions. The method also includes generating, at the user device, audio output based on audio data from a subset of the multiple audio capture devices. Each audio capture device in the subset is located in the selected region.
    Type: Application
    Filed: September 18, 2015
    Publication date: March 23, 2017
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri