Patents by Inventor Lae Hoon Kim

Lae Hoon Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10623845
    Abstract: Methods, systems, computer-readable media, and apparatuses for gesture control are presented. One example includes indicating, based on information from a first audio input signal, a presence of an object in proximity to a microphone, and increasing a volume level in response to the indicating.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: April 14, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Dongmei Wang, Fatemeh Saki, Erik Visser, Anne Katrin Konertz, Sharon Kaziunas, Shuhua Zhang, Cheng-Yu Hung
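As a rough illustration of the proximity-triggered volume control described in the entry above, the Python sketch below flags an object near the microphone when low-frequency energy dominates a captured frame and then raises the volume. The 300 Hz split, the threshold, and the function names are assumptions for illustration, not details from the patent.

```python
import numpy as np

def object_in_proximity(frame: np.ndarray, sample_rate: int, threshold: float = 3.0) -> bool:
    """Indicate proximity when low-frequency energy dominates the frame (near-field boost)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    low = spectrum[freqs < 300.0].sum() + 1e-12    # 300 Hz split is an arbitrary choice
    high = spectrum[freqs >= 300.0].sum() + 1e-12
    return (low / high) > threshold

def update_volume(frame: np.ndarray, sample_rate: int, volume: float) -> float:
    """Increase the volume level in response to a proximity indication."""
    if object_in_proximity(frame, sample_rate):
        volume = min(1.0, volume + 0.1)
    return volume
```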
  • Patent number: 10547947
    Abstract: A headset device includes a first earpiece configured to receive a reference sound and to generate a first reference audio signal based on the reference sound. The headset device further includes a second earpiece configured to receive the reference sound and to generate a second reference audio signal based on the reference sound. The headset device further includes a controller coupled to the first earpiece and to the second earpiece. The controller is configured to generate a first signal and a second signal based on a phase relationship between the first reference audio signal and the second reference audio signal. The controller is further configured to output the first signal to the first earpiece and output the second signal to the second earpiece.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: January 28, 2020
    Assignee: Qualcomm Incorporated
    Inventors: Lae-Hoon Kim, Hyun Jin Park, Erik Visser, Raghuveer Peri
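The sketch below loosely mirrors the phase-relationship idea in the entry above: it estimates the sample lag between the two reference signals by cross-correlation and delays the program audio per earpiece accordingly. It is a simplified stand-in under assumed conventions, not the patented controller.

```python
import numpy as np

def inter_ear_lag(left_ref: np.ndarray, right_ref: np.ndarray) -> int:
    """Signed sample lag between the references (positive: the left reference lags the right)."""
    corr = np.correlate(left_ref, right_ref, mode="full")
    return int(np.argmax(corr)) - (len(right_ref) - 1)

def earpiece_signals(left_ref: np.ndarray, right_ref: np.ndarray, program: np.ndarray):
    """Delay the program audio per ear so it mirrors the reference phase relationship."""
    lag = inter_ear_lag(left_ref, right_ref)
    delayed = np.concatenate([np.zeros(abs(lag)), program])[: len(program)]
    if lag >= 0:          # left reference lagged, so delay the left output
        return delayed, program
    return program, delayed
```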
  • Patent number: 10540979
    Abstract: A device includes a memory, a receiver, a processor, and a display. The memory is configured to store a speaker model. The receiver is configured to receive an input audio signal. The processor is configured to determine a first confidence level associated with a first portion of the input audio signal based on the speaker model. The processor is also configured to determine a second confidence level associated with a second portion of the input audio signal based on the speaker model. The display is configured to present a graphical user interface associated with the first confidence level or associated with the second confidence level.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: January 21, 2020
    Assignee: Qualcomm Incorporated
    Inventors: Erik Visser, Lae-Hoon Kim, Minho Jin, Yinyi Guo
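A toy Python version of the per-portion confidence scoring described above, assuming the stored "speaker model" is simply an average feature vector and using a crude band-energy feature in place of real speaker embeddings; all names and dimensions are illustrative.

```python
import numpy as np

def features(frame: np.ndarray) -> np.ndarray:
    """Toy band-energy feature; a real system would use MFCCs or speaker embeddings."""
    spectrum = np.abs(np.fft.rfft(frame))
    return np.array([band.mean() for band in np.array_split(spectrum, 8)])

def confidence(portion: np.ndarray, speaker_model: np.ndarray) -> float:
    """Cosine similarity between the portion's features and the stored model vector."""
    f = features(portion)
    denom = np.linalg.norm(f) * np.linalg.norm(speaker_model) + 1e-12
    return float(np.dot(f, speaker_model) / denom)

def portion_confidences(signal: np.ndarray, speaker_model: np.ndarray):
    """First and second confidence levels for the two halves of the input signal."""
    first, second = np.array_split(signal, 2)
    return confidence(first, speaker_model), confidence(second, speaker_model)
```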
  • Patent number: 10492015
    Abstract: A wireless device is described. The wireless device includes at least two microphones on the wireless device. The microphones are configured to capture sound from a target user. The wireless device also includes processing circuitry. The processing circuitry is coupled to the microphones. The processing circuitry is configured to locate the target user. The wireless device further includes a communication interface. The communication interface is coupled to the processing circuitry. The communication interface is configured to receive external device microphone audio from at least one external device microphone to assist the processing circuitry in the wireless device to locate the target user.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: November 26, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Lae-Hoon Kim, Pei Xiang, Erik Visser
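For illustration only, the sketch below estimates a direction of arrival from two onboard microphones via a cross-correlation time difference of arrival and uses the external-device microphone feed merely as an energy cue that the target is active; the 8 cm spacing, the threshold, and the two-microphone geometry are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def doa_degrees(mic_a: np.ndarray, mic_b: np.ndarray, sample_rate: int,
                mic_distance: float = 0.08) -> float:
    """Direction of arrival from the TDOA between two onboard microphones."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_b) - 1)
    delay = lag / sample_rate
    cos_theta = np.clip(delay * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def target_active(external_mic: np.ndarray, energy_threshold: float = 1e-3) -> bool:
    """Use the external device's microphone only as a coarse activity cue."""
    return float(np.mean(external_mic ** 2)) > energy_threshold
```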
  • Publication number: 20190355351
    Abstract: A device includes a memory configured to store a user experience evaluation unit. A processor is configured to receive a first user input corresponding to a user command to initiate a particular task, the first user input received via a first sensor. The processor is configured to, after receiving the first user input, receive one or more subsequent user inputs, the one or more subsequent user inputs including a second user input received via a second sensor. The processor is configured to initiate a remedial action in response to determining, based on the user experience evaluation unit, that the one or more subsequent user inputs correspond to a negative user experience.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 21, 2019
    Inventors: Lae-Hoon Kim, Yinyi Guo, Ravi Choudhary, Sunkuk Moon, Erik Visser, Fatemeh Saki
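A hypothetical, rule-based stand-in for the "user experience evaluation unit" mentioned above: repeated retries of the same command or frustration phrases among the subsequent inputs are treated as a negative experience and trigger a remedial action. The markers, limits, and class names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceEvaluator:
    negative_markers: tuple = ("cancel", "stop", "that is not what i wanted")
    retry_limit: int = 2
    history: list = field(default_factory=list)

    def observe(self, user_input: str) -> bool:
        """Return True when the accumulated inputs suggest a negative experience."""
        self.history.append(user_input.lower())
        retries = sum(1 for entry in self.history if entry == self.history[0])
        frustrated = any(marker in entry for entry in self.history
                         for marker in self.negative_markers)
        return retries > self.retry_limit or frustrated

def handle(evaluator: ExperienceEvaluator, user_input: str) -> None:
    """Initiate a remedial action once the evaluation turns negative."""
    if evaluator.observe(user_input):
        print("remedial action: offer help or rephrase the prompt")
```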
  • Publication number: 20190341026
    Abstract: A device includes a memory configured to store category labels associated with categories of a natural language processing library. A processor is configured to analyze input audio data to generate a text string and to perform natural language processing on at least the text string to generate an output text string including an action associated with a first device, a speaker, a location, or a combination thereof. The processor is configured to compare the input audio data to audio data of the categories to determine whether the input audio data matches any of the categories and, in response to determining that the input audio data does not match any of the categories: create a new category label, associate the new category label with at least a portion of the output text string, update the categories with the new category label, and generate a notification indicating the new category label.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Inventors: Erik Visser, Fatemeh Saki, Yinyi Guo, Sunkuk Moon, Lae-Hoon Kim, Ravi Choudhary
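The sketch below shows the "no matching category" path described above with a made-up cosine-similarity comparison over audio feature vectors: when no stored prototype matches, it creates a new category label, updates the library, and emits a notification. The threshold and naming scheme are assumptions.

```python
import numpy as np

class CategoryLibrary:
    def __init__(self, match_threshold: float = 0.8):
        self.prototypes = {}              # category label -> prototype feature vector
        self.match_threshold = match_threshold

    @staticmethod
    def _similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def classify_or_create(self, audio_features: np.ndarray, output_text: str) -> str:
        """Return a matching category label, or create one if nothing matches."""
        scores = {label: self._similarity(audio_features, proto)
                  for label, proto in self.prototypes.items()}
        if scores and max(scores.values()) >= self.match_threshold:
            return max(scores, key=scores.get)
        new_label = f"category_{len(self.prototypes)}"   # new category label
        self.prototypes[new_label] = audio_features      # update the library
        print(f"notification: created {new_label!r} for {output_text!r}")
        return new_label
```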
  • Publication number: 20190320281
    Abstract: An apparatus includes a processor configured to receive one or more media signals associated with a scene. The processor is also configured to identify a spatial location in the scene for each source of the one or more media signals. The processor is further configured to identify audio content for each media signal of the one or more media signals. The processor is also configured to determine one or more candidate spatial locations in the scene based on the identified spatial locations. The processor is further configured to generate audio to playback as virtual sounds that originate from the one or more candidate spatial locations.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Yinyi Guo, Lae-Hoon Kim, Dongmei Wang, Erik Visser
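As a very rough stand-in for the candidate-location and virtual-sound idea above, this sketch picks a candidate azimuth between identified sources and renders a mono clip there with constant-power stereo panning; real spatialization (e.g., HRTF rendering) would be far more involved, and all values here are illustrative.

```python
import numpy as np

def candidate_azimuth(source_azimuths_deg: list) -> float:
    """Pick the midpoint of the identified source directions as one candidate location."""
    return float(np.mean(source_azimuths_deg))

def render_virtual_sound(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Return a (2, n) stereo buffer panned toward azimuth (-90 = left, +90 = right)."""
    pan = np.clip(azimuth_deg / 90.0, -1.0, 1.0)
    theta = (pan + 1.0) * np.pi / 4.0                       # 0 .. pi/2
    left_gain, right_gain = np.cos(theta), np.sin(theta)    # constant-power law
    return np.vstack([left_gain * mono, right_gain * mono])
```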
  • Publication number: 20190311728
    Abstract: Methods, systems, and devices for auditory enhancement are described. A device may receive a respective auditory signal at each of a set of microphones, where each auditory signal includes a respective representation of a target auditory component and one or more noise artifacts. The device may identify a directionality associated with a source of the target auditory component (e.g., based on an arrangement of the multiple microphones). The device may determine a distribution function for the target auditory component based at least in part on the directionality associated with the source and on the received plurality of auditory signals. The device may generate an estimate of the target auditory component based at least in part on the distribution function and output the estimate of the target auditory component.
    Type: Application
    Filed: April 9, 2018
    Publication date: October 10, 2019
    Inventors: Lae-Hoon Kim, Shuhua Zhang, Erik Visser
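The abstract above describes estimating the target component from a distribution function conditioned on source directionality. As a much simpler illustration of direction-informed enhancement, the sketch below steers a delay-and-sum beamformer toward an assumed direction over an assumed uniform linear array; the spacing and geometry are invented.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mics: np.ndarray, sample_rate: int, angle_deg: float,
                  spacing: float = 0.04) -> np.ndarray:
    """mics: (n_mics, n_samples) from a uniform linear array; returns one enhanced channel."""
    n_mics, n_samples = mics.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        delay = m * spacing * np.cos(np.radians(angle_deg)) / SPEED_OF_SOUND
        shift = int(round(delay * sample_rate))
        out += np.roll(mics[m], -shift)   # circular shift is acceptable for a short sketch
    return out / n_mics
```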
  • Patent number: 10431211
    Abstract: An apparatus includes multiple microphones to generate audio signals based on sound of a far-field acoustic environment. The apparatus also includes a signal processing system to process the audio signals to generate at least one processed audio signal. The signal processing system is configured to update one or more processing parameters while operating in a first operational mode and is configured to use a static version of the one or more processing parameters while operating in a second operational mode. The apparatus further includes a keyword detection system to perform keyword detection based on the at least one processed audio signal to determine whether the sound includes an utterance corresponding to a keyword and, based on a result of the keyword detection, to send a control signal to the signal processing system to change an operational mode of the signal processing system.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: October 1, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Asif Mohammad, Ian Ernan Liu, Ye Jiang
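A control-flow sketch of the mode switching described above: the front end adapts its noise estimate only in an "adaptive" mode and freezes it in a "static" mode, while a (stubbed) keyword detector sends the control signal that flips the mode. All names, sizes, and smoothing constants are assumptions.

```python
import numpy as np

class FrontEnd:
    """Spectral-subtraction front end for frames of up to 512 samples."""
    def __init__(self):
        self.mode = "adaptive"
        self.noise_estimate = np.zeros(257)

    def process(self, frame: np.ndarray) -> np.ndarray:
        spectrum = np.fft.rfft(frame, n=512)
        if self.mode == "adaptive":       # update processing parameters only in this mode
            self.noise_estimate = 0.95 * self.noise_estimate + 0.05 * np.abs(spectrum)
        gain = np.maximum(1.0 - self.noise_estimate / (np.abs(spectrum) + 1e-12), 0.1)
        return np.fft.irfft(gain * spectrum, n=512)[: len(frame)]

def detect_keyword(processed: np.ndarray) -> bool:
    return False                          # placeholder for a real keyword detector

def run(front_end: FrontEnd, frame: np.ndarray) -> None:
    processed = front_end.process(frame)
    if detect_keyword(processed):         # control signal back to the front end
        front_end.mode = "static"         # freeze the processing parameters
```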
  • Publication number: 20190251971
    Abstract: In a particular aspect, a speech generator includes a signal input configured to receive a first audio signal. The speech generator also includes at least one speech signal processor configured to generate a second audio signal based on information associated with the first audio signal and based further on automatic speech recognition (ASR) data associated with the first audio signal.
    Type: Application
    Filed: April 26, 2019
    Publication date: August 15, 2019
    Inventors: Erik Visser, Shuhua Zhang, Lae-Hoon Kim, Yinyi Guo, Sunkuk Moon
  • Patent number: 10379534
    Abstract: A drone system and method. Audio signals are received via one or more microphones positioned relative to a location on a drone and one or more of the audio signals are identified as of interest. Flight characteristics of the drone are then controlled based on the audio signals that are of interest.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: August 13, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Erik Visser, Lae-Hoon Kim, Ricardo De Jesus Bernal Castillo, Shuhua Zhang, Raghuveer Peri
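Purely for illustration, the sketch below flags frames containing a sound of interest with a simple band-energy test and nudges the drone's yaw toward whichever of two microphones hears it more strongly; real detection and flight control would be far more elaborate, and the band and gains are invented.

```python
import numpy as np

def sound_of_interest(frame: np.ndarray, sample_rate: int,
                      band=(300.0, 3000.0)) -> bool:
    """Crude detector: most of the frame's energy lies in the band of interest."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return in_band > 0.5 * spectrum.sum()

def yaw_command(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Positive: yaw right, negative: yaw left, zero: hold heading."""
    if not (sound_of_interest(left, sample_rate) or sound_of_interest(right, sample_rate)):
        return 0.0
    balance = float(np.mean(right ** 2) - np.mean(left ** 2))
    return float(np.clip(balance * 10.0, -1.0, 1.0))
```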
  • Patent number: 10332520
    Abstract: In a particular aspect, an apparatus includes an audio sensor configured to receive an input audio signal. The apparatus also includes speech generative circuitry configured to generate a synthesized audio signal based at least partly on automatic speech recognition (ASR) data associated with the input audio signal and based on one or more parameters indicative of state information associated with the input audio signal.
    Type: Grant
    Filed: February 13, 2017
    Date of Patent: June 25, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Erik Visser, Shuhua Zhang, Lae-Hoon Kim, Yinyi Guo, Sunkuk Moon
  • Publication number: 20190139552
    Abstract: An electronic device includes a display, wherein the display is configured to present a user interface, wherein the user interface comprises a coordinate system. The coordinate system corresponds to physical coordinates. The display is configured to present a sector selection feature that allows selection of at least one sector of the coordinate system. The at least one sector corresponds to captured audio from multiple microphones. The sector selection may also include an audio signal indicator. The electronic device includes operation circuitry coupled to the display. The operation circuitry is configured to perform an audio operation on the captured audio corresponding to the audio signal indicator based on the sector selection.
    Type: Application
    Filed: September 24, 2018
    Publication date: May 9, 2019
    Inventors: Lae-Hoon Kim, Erik Visser, Phuong Lam Ton, Jeremy Patrick Toman, Jeffrey Clinton Shaw
  • Publication number: 20190138095
    Abstract: An apparatus includes one or more sensor units configured to detect non-audible sensor data associated with a user. The apparatus also includes a processor, including an action determination unit, coupled to the one or more sensor units. The processor is configured to generate a descriptive text-based input based on the non-audible sensor data. The processor is also configured to determine an action to be performed based on the descriptive text-based input.
    Type: Application
    Filed: November 3, 2017
    Publication date: May 9, 2019
    Inventors: Erik Visser, Sunkuk Moon, Yinyi Guo, Lae-Hoon Kim, Shuhua Zhang
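A hypothetical mapping from non-audible sensor readings to a descriptive text-based input and then to an action, echoing the structure of the abstract above; the sensor names, thresholds, and rules are invented.

```python
def describe(sensors: dict) -> str:
    """Turn non-audible sensor readings into a descriptive text-based input."""
    parts = []
    if sensors.get("heart_rate", 0) > 100:
        parts.append("user's heart rate is elevated")
    if sensors.get("steps_per_minute", 0) > 120:
        parts.append("user is moving quickly")
    return "; ".join(parts) or "no notable activity"

def determine_action(description: str) -> str:
    """Pick an action from the descriptive text (stand-in for the action determination unit)."""
    if "heart rate is elevated" in description and "moving quickly" in description:
        return "start workout tracking"
    if description == "no notable activity":
        return "do nothing"
    return "log activity"
```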
  • Publication number: 20190098070
    Abstract: Various embodiments provide systems and methods in which a command device can be used to establish a wireless connection, through one or more wireless channels, between the command device and a remote device. An intention code may be generated prior to or after the establishment of the wireless connection, and the remote device may be selected based on the intention code. The command device may initiate a wireless transfer, through one or more wireless channels of the established wireless connection, of the intention code, and receive acknowledgement that the intention code was successfully transferred to the remote device. The command device may then control the remote device, based on the intention code sent to the remote device, through the one or more wireless channels of the established wireless connection between the command device and the remote device.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Inventors: Lae-Hoon Kim, Erik Visser, Yinyi Guo
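An in-memory sketch of the exchange described above: the command device generates an intention code, transfers it to the remote device, waits for an acknowledgement, and then issues commands tied to that code. No real wireless stack is involved; the channel is simulated with direct method calls, and all names are assumptions.

```python
import uuid

class RemoteDevice:
    def __init__(self, name: str):
        self.name = name
        self.accepted_codes = set()

    def receive_intention(self, code: str) -> bool:
        """Store the transferred intention code and acknowledge it."""
        self.accepted_codes.add(code)
        return True

    def control(self, code: str, command: str) -> str:
        if code not in self.accepted_codes:
            return "rejected: unknown intention code"
        return f"{self.name} executing {command!r}"

class CommandDevice:
    def pair_and_control(self, device: RemoteDevice, command: str) -> str:
        code = uuid.uuid4().hex           # generate an intention code
        if not device.receive_intention(code):
            return "transfer failed"
        return device.control(code, command)
```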
  • Patent number: 10157272
    Abstract: A method for evaluating strength of an audio password by an electronic device is described. The method includes obtaining an audio signal captured by one or more microphones. The audio signal includes an audio password. The method also includes evaluating the strength of the audio password based on measuring one or more unique characteristics of the audio signal. The method further includes informing a user that the audio password is weak based on the evaluation of the strength of the audio password.
    Type: Grant
    Filed: February 4, 2014
    Date of Patent: December 18, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Juhan Nam, Erik Visser
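A loose sketch of "measuring unique characteristics" of an audio password: longer, more spectrally varied, and more dynamic utterances score higher, and a weak score triggers a warning to the user. The features and thresholds here are invented, not the patented metric.

```python
import numpy as np

def password_strength(signal: np.ndarray, sample_rate: int) -> float:
    """Score in [0, 1] from duration, spectral variety, and amplitude dynamics."""
    duration = len(signal) / sample_rate
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, 16)
    active_bands = sum(1 for band in bands if band.mean() > 0.05 * spectrum.mean())
    dynamics = float(np.std(signal) / (np.mean(np.abs(signal)) + 1e-12))
    return min(duration / 2.0, 1.0) * (active_bands / 16.0) * min(dynamics, 1.0)

def check_password(signal: np.ndarray, sample_rate: int) -> None:
    """Inform the user when the evaluated audio password looks weak."""
    if password_strength(signal, sample_rate) < 0.3:
        print("Warning: this audio password appears weak; try a longer, more varied phrase.")
```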
  • Publication number: 20180307753
    Abstract: An electronic device includes a classifier circuit, a ranking circuit, and a data generator circuit. The classifier circuit is configured to determine, based on first data indicating samples of sounds detected at a plurality of geographic locations, a plurality of acoustic event classifications associated with the plurality of geographic locations. The ranking circuit is configured to determine a plurality of index scores associated with the plurality of geographic locations by ranking each of the plurality of geographic locations based on the plurality of acoustic event classifications. The data generator circuit is configured to generate, based on the plurality of index scores, second data indicating a geographic map corresponding to the plurality of geographic locations. The second data further indicates the plurality of index scores and a prompt to enable a search for a particular type of acoustic event.
    Type: Application
    Filed: April 21, 2017
    Publication date: October 25, 2018
    Inventors: Yinyi Guo, Erik Visser, Lae-Hoon Kim
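The sketch below mirrors the classify, rank, and map-data pipeline in the abstract above with invented event labels and a trivial scoring rule (the share of "noisy" events per location); the returned dictionary stands in for the second data that drives the geographic map and search prompt.

```python
from collections import Counter

NOISY_EVENTS = {"siren", "car_horn", "construction"}   # invented event classifications

def index_scores(events_by_location: dict) -> dict:
    """Rank each location by the share of noisy acoustic events detected there."""
    scores = {}
    for location, events in events_by_location.items():
        counts = Counter(events)
        scores[location] = sum(counts[e] for e in NOISY_EVENTS) / max(len(events), 1)
    return scores

def map_data(events_by_location: dict) -> dict:
    """Second data: ranked locations, their index scores, and a search prompt."""
    scores = index_scores(events_by_location)
    return {
        "locations": sorted(scores, key=scores.get, reverse=True),
        "scores": scores,
        "prompt": "Search for a type of acoustic event",
    }
```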
  • Patent number: 10107887
    Abstract: A method for displaying a user interface on an electronic device is described. The method includes presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The method also includes providing a sector selection feature that allows selection of at least one sector of the coordinate system. The method further includes providing a sector editing feature that allows editing the at least one sector.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 23, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Phuong Lam Ton, Jeremy Patrick Toman, Jeffrey Clinton Shaw
  • Patent number: 10073521
    Abstract: Disclosed is an application interface that takes into account the user's gaze direction relative to who is speaking in an interactive multi-participant environment where audio-based contextual information and/or visual-based semantic information is being presented. Among these various implementations, two different types of microphone array devices (MADs) may be used. The first type of MAD is a steerable microphone array (a.k.a. a steerable array) which is worn by a user in a known orientation with regard to the user's eyes, and wherein multiple users may each wear a steerable array. The second type of MAD is a fixed-location microphone array (a.k.a. a fixed array) which is placed in the same acoustic space as the users (one or more of which are using steerable arrays).
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: September 11, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Jongwon Shin, Erik Visser
  • Patent number: 10073607
    Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: September 11, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng