Patents by Inventor Raghuveer Peri

Raghuveer Peri has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have been granted by the United States Patent and Trademark Office (USPTO).
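Brief illustrative code sketches for several of these inventions appear after the listing.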

  • Patent number: 10547947
    Abstract: A headset device includes a first earpiece configured to receive a reference sound and to generate a first reference audio signal based on the reference sound. The headset device further includes a second earpiece configured to receive the reference sound and to generate a second reference audio signal based on the reference sound. The headset device further includes a controller coupled to the first earpiece and to the second earpiece. The controller is configured to generate a first signal and a second signal based on a phase relationship between the first reference audio signal and the second reference audio signal. The controller is further configured to output the first signal to the first earpiece and output the second signal to the second earpiece.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: January 28, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Hyun Jin Park, Erik Visser, Raghuveer Peri
  • Patent number: 10379534
    Abstract: A drone system and method are disclosed. Audio signals are received via one or more microphones positioned relative to a location on a drone, and one or more of the audio signals are identified as being of interest. Flight characteristics of the drone are then controlled based on the audio signals that are of interest.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: August 13, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Lae-Hoon Kim, Ricardo De Jesus Bernal Castillo, Shuhua Zhang, Raghuveer Peri
  • Patent number: 10073607
    Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: September 11, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng
  • Patent number: 10051364
    Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: August 14, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng
  • Patent number: 10013996
    Abstract: A method of performing noise reduction includes capturing a first audio signal at a first microphone of a first device. The method also includes receiving, at the first device, audio data representative of a second audio signal from a second device. The second audio signal is captured by a second microphone of the second device. The method further includes performing noise reduction on the first audio signal based at least in part on the audio data representative of the second audio signal.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: July 3, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
  • Patent number: 9906885
    Abstract: A method for outputting virtual sound includes detecting an audio signal in an environment at one or more microphones. The method also includes determining, at a processor, a location of a sound source of the audio signal and estimating one or more acoustical characteristics of the environment based on the audio signal. The method further includes inserting a virtual sound into the environment based on the one or more acoustical characteristics. The virtual sound has one or more audio properties of a sound generated from the location of the sound source.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: February 27, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Lae-Hoon Kim, Raghuveer Peri
  • Publication number: 20180020312
    Abstract: A method for outputting virtual sound includes detecting an audio signal in an environment at one or more microphones. The method also includes determining, at a processor, a location of a sound source of the audio signal and estimating one or more acoustical characteristics of the environment based on the audio signal. The method further includes inserting a virtual sound into the environment based on the one or more acoustical characteristics. The virtual sound has one or more audio properties of a sound generated from the location of the sound source.
    Type: Application
    Filed: August 16, 2016
    Publication date: January 18, 2018
    Inventors: Erik Visser, Lae-Hoon Kim, Raghuveer Peri
  • Publication number: 20170353809
    Abstract: A method of operation of a device includes receiving an input signal at the device. The input signal is generated using at least one microphone. The input signal includes a first signal component having a first amount of wind turbulence noise and a second signal component having a second amount of wind turbulence noise that is greater than the first amount of wind turbulence noise. The method further includes generating, based on the input signal, an output signal at the device. The output signal includes the first signal component and a third signal component that replaces the second signal component. A first frequency response of the input signal corresponds to a second frequency response of the output signal.
    Type: Application
    Filed: June 1, 2016
    Publication date: December 7, 2017
    Inventors: Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Raghuveer Peri, Yinyi Guo
  • Patent number: 9838815
    Abstract: A method of operation of a device includes receiving an input signal at the device. The input signal is generated using at least one microphone. The input signal includes a first signal component having a first amount of wind turbulence noise and a second signal component having a second amount of wind turbulence noise that is greater than the first amount of wind turbulence noise. The method further includes generating, based on the input signal, an output signal at the device. The output signal includes the first signal component and a third signal component that replaces the second signal component. A first frequency response of the input signal corresponds to a second frequency response of the output signal.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: December 5, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Raghuveer Peri, Yinyi Guo
  • Publication number: 20170339491
    Abstract: A headset device includes a first earpiece configured to receive a reference sound and to generate a first reference audio signal based on the reference sound. The headset device further includes a second earpiece configured to receive the reference sound and to generate a second reference audio signal based on the reference sound. The headset device further includes a controller coupled to the first earpiece and to the second earpiece. The controller is configured to generate a first signal and a second signal based on a phase relationship between the first reference audio signal and the second reference audio signal. The controller is further configured to output the first signal to the first earpiece and output the second signal to the second earpiece.
    Type: Application
    Filed: May 18, 2016
    Publication date: November 23, 2017
    Inventors: Lae-Hoon Kim, Hyun Jin Park, Erik Visser, Raghuveer Peri
  • Publication number: 20170270406
    Abstract: A method of training a device specific cloud-based audio processor includes receiving sensor data captured from multiple sensors at a local device. The method also includes receiving spatial information labels computed on the local device using local configuration information. The spatial information labels are associated with the captured sensor data. Lower layers of a first neural network are trained based on the spatial information labels and sensor data. The trained lower layers are incorporated into a second, larger neural network for audio classification. The second, larger neural network may be retrained using the trained lower layers of the first neural network.
    Type: Application
    Filed: September 22, 2016
    Publication date: September 21, 2017
    Inventors: Erik Visser, Minho Jin, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
  • Publication number: 20170220036
    Abstract: A drone system and method are disclosed. Audio signals are received via one or more microphones positioned relative to a location on a drone, and one or more of the audio signals are identified as being of interest. Flight characteristics of the drone are then controlled based on the audio signals that are of interest.
    Type: Application
    Filed: June 17, 2016
    Publication date: August 3, 2017
    Inventors: Erik Visser, Lae-Hoon Kim, Ricardo De Jesus Bernal Castillo, Shuhua Zhang, Raghuveer Peri
  • Patent number: 9706300
    Abstract: A method of generating audio output includes displaying a graphical user interface (GUI) at a user device. The GUI represents an area having multiple regions and multiple audio capture devices are located in the area. The method also includes receiving audio data from the multiple audio capture devices. The method further includes receiving an input indicating a selected region of the multiple regions. The method also includes generating, at the user device, audio output based on audio data from a subset of the multiple audio capture devices. Each audio capture device in the subset is located in the selected region.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: July 11, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
  • Patent number: 9666183
    Abstract: Disclosed is a feature extraction and classification methodology wherein audio data is gathered in a target environment under varying conditions. From this collected data, corresponding features are extracted, labeled with appropriate filters (e.g., audio event descriptions), and used for training deep neural networks (DNNs) to extract underlying target audio events from unlabeled training data. Once trained, these DNNs are used to predict underlying events in noisy audio to extract therefrom features that enable the separation of the underlying audio events from the noisy components thereof.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: May 30, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Yinyi Guo, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
  • Publication number: 20170085985
    Abstract: A method of generating audio output includes displaying a graphical user interface (GUI) at a user device. The GUI represents an area having multiple regions and multiple audio capture devices are located in the area. The method also includes receiving audio data from the multiple audio capture devices. The method further includes receiving an input indicating a selected region of the multiple regions. The method also includes generating, at the user device, audio output based on audio data from a subset of the multiple audio capture devices. Each audio capture device in the subset is located in the selected region.
    Type: Application
    Filed: September 18, 2015
    Publication date: March 23, 2017
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
  • Publication number: 20170084286
    Abstract: A method of performing noise reduction includes capturing a first audio signal at a first microphone of a first device. The method also includes receiving, at the first device, audio data representative of a second audio signal from a second device. The second audio signal is captured by a second microphone of the second device. The method further includes performing noise reduction on the first audio signal based at least in part on the audio data representative of the second audio signal.
    Type: Application
    Filed: September 18, 2015
    Publication date: March 23, 2017
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
  • Patent number: 9578439
    Abstract: Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: February 21, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Raghuveer Peri, Erik Visser
  • Publication number: 20160284346
    Abstract: Disclosed is a feature extraction and classification methodology wherein audio data is gathered in a target environment under varying conditions. From this collected data, corresponding features are extracted, labeled with appropriate filters (e.g., audio event descriptions), and used for training deep neural networks (DNNs) to extract underlying target audio events from unlabeled training data. Once trained, these DNNs are used to predict underlying events in noisy audio to extract therefrom features that enable the separation of the underlying audio events from the noisy components thereof.
    Type: Application
    Filed: March 27, 2015
    Publication date: September 29, 2016
    Inventors: Erik Visser, Yinyi Guo, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
  • Publication number: 20160198282
    Abstract: Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.
    Type: Application
    Filed: July 23, 2015
    Publication date: July 7, 2016
    Inventors: Lae-Hoon Kim, Raghuveer Peri, Erik Visser
  • Publication number: 20160004499
    Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
    Type: Application
    Filed: July 1, 2015
    Publication date: January 7, 2016
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng
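
Illustrative Code Sketches

For patent 10547947 (publication 20170339491), the abstract describes generating two earpiece signals based on the phase relationship between two reference audio signals. The sketch below is a minimal, hypothetical illustration of one way such a relationship could be measured and used, assuming a GCC-PHAT lag estimate and a toy time-alignment policy; the function names, the broadband test signal, and the alignment step are assumptions, not the patented method.

```python
# Hypothetical sketch (not the patented method): estimate the phase/lag
# relationship between the two earpiece reference signals with GCC-PHAT,
# then derive the two output signals from it with a toy alignment policy.
import numpy as np

def estimate_lag(ref_left, ref_right):
    """Estimate how many samples ref_right lags ref_left (GCC-PHAT)."""
    n = len(ref_left) + len(ref_right)
    L = np.fft.rfft(ref_left, n=n)
    R = np.fft.rfft(ref_right, n=n)
    cross = R * np.conj(L)
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting
    corr = np.fft.irfft(cross, n=n)
    corr = np.concatenate((corr[-(n // 2):], corr[:n // 2 + 1]))
    return int(np.argmax(np.abs(corr))) - n // 2

def earpiece_signals(ref_left, ref_right):
    """Derive first/second output signals from the measured phase relationship."""
    lag = estimate_lag(ref_left, ref_right)
    # Toy policy: time-align the two reference signals before further use.
    if lag > 0:
        return ref_left[lag:], ref_right[:len(ref_right) - lag]
    if lag < 0:
        return ref_left[:len(ref_left) + lag], ref_right[-lag:]
    return ref_left, ref_right

src = np.random.randn(16000)                       # broadband test sound
left, right = src, np.roll(src, 8)                 # right earpiece hears it 8 samples later
out_left, out_right = earpiece_signals(left, right)
print("estimated lag:", estimate_lag(left, right), "samples")
```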
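
For patent 10379534 (publication 20170220036), the abstract describes identifying audio signals of interest and controlling flight characteristics of a drone based on them. The sketch below assumes a toy energy detector, a two-microphone delay-based bearing estimate, a 0.10 m microphone spacing, and a (yaw rate, forward speed) command format; all of these are illustrative assumptions rather than the patented control scheme.

```python
# Hypothetical sketch (not the patented control scheme): flag frames of
# interest with an energy detector, estimate a coarse bearing from the
# inter-microphone delay, and emit a simple (yaw_rate, forward_speed) command.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.10       # assumed distance between the two microphones, in metres

def frames(signal, frame_len):
    n = len(signal) // frame_len
    return signal[:n * frame_len].reshape(n, frame_len)

def is_of_interest(frame, threshold=0.01):
    """Toy detector: a frame is 'of interest' if its energy exceeds a threshold."""
    return np.mean(frame ** 2) > threshold

def bearing_from_tdoa(frame_a, frame_b, fs):
    """Coarse bearing (radians) from the delay of mic A relative to mic B."""
    corr = np.correlate(frame_a, frame_b, mode="full")
    lag = int(np.argmax(corr)) - (len(frame_b) - 1)
    sin_theta = np.clip((lag / fs) * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(sin_theta)

def control_step(frame_a, frame_b, fs):
    """Return a flight command for one pair of audio frames."""
    if not is_of_interest(frame_a):
        return 0.0, 0.0                            # nothing of interest: hold course
    yaw = bearing_from_tdoa(frame_a, frame_b, fs)
    return 0.5 * yaw, 1.0                          # turn toward the source, move forward

fs = 16000
mic_a, mic_b = np.random.randn(fs), np.random.randn(fs)   # stand-in recordings
for fa, fb in zip(frames(mic_a, 1024), frames(mic_b, 1024)):
    yaw_rate, forward_speed = control_step(fa, fb, fs)
```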
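
For patents 10073607 and 10051364 (publication 20160004499), the abstract describes processing several real-time microphone signals based on user input given through a GUI that presents audio information. As a loose illustration, the sketch below stands in for the GUI state with a plain dictionary of per-microphone gains and mixes the signals accordingly; the gain-and-mix policy is an assumption, not the claimed processing.

```python
# Hypothetical sketch (the GUI state is faked with a dictionary): apply
# user-chosen per-microphone gains to real-time signals and mix them into
# one processed output signal.
import numpy as np

def mix_from_gui_state(mic_signals, gui_gains):
    """mic_signals: dict name -> 1-D array; gui_gains: dict name -> gain set in the GUI."""
    length = min(len(sig) for sig in mic_signals.values())
    out = np.zeros(length)
    for name, sig in mic_signals.items():
        out += gui_gains.get(name, 0.0) * sig[:length]
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out       # avoid clipping the mixed output

# Example: the user muted the "rear" microphone and boosted "front" in the GUI.
mics = {"front": np.random.randn(16000), "rear": np.random.randn(16000)}
processed = mix_from_gui_state(mics, {"front": 1.5, "rear": 0.0})
```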
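
For patent 10013996 (publication 20170084286), the abstract describes reducing noise in a first device's microphone signal based on audio data from a second device. One conventional way to use a remote signal as a noise reference is an NLMS adaptive canceller, sketched below; the NLMS choice, tap count, and step size are assumptions, since the abstract does not specify an algorithm.

```python
# Hypothetical sketch (NLMS is an assumption; the patent abstract does not
# name an algorithm): use the second device's audio as a noise reference and
# adaptively subtract its filtered estimate from the first device's signal.
import numpy as np

def nlms_noise_reduction(primary, reference, taps=64, mu=0.1, eps=1e-8):
    """Return `primary` with the component correlated to `reference` removed."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]            # most recent reference samples
        e = primary[n] - w @ x                     # error = cleaned sample
        w += (mu / (x @ x + eps)) * e * x          # NLMS weight update
        out[n] = e
    return out

fs = 16000
noise = np.random.randn(fs)                        # what the second device's mic hears
speech = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)
primary = speech + 0.8 * np.roll(noise, 5)         # noise reaches the first device delayed
cleaned = nlms_noise_reduction(primary, noise)
```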
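
For patent 9906885 (publication 20180020312), the abstract describes estimating acoustical characteristics of an environment and inserting a virtual sound with audio properties of a sound generated from the detected source location. The sketch below assumes a crude envelope-based decay estimate, a synthetic exponentially decaying impulse response, and a simple stereo amplitude pan; none of these particular steps comes from the patent.

```python
# Hypothetical sketch (the decay estimate, synthetic impulse response, and
# stereo pan are simplifications): estimate a rough reverberation decay from
# a recorded sound, then give a virtual sound that decay and a direction.
import numpy as np

def estimate_decay_seconds(recorded, fs):
    """Very rough decay-time estimate from a smoothed signal envelope."""
    env = np.convolve(np.abs(recorded), np.ones(256) / 256, mode="same")
    peak = int(np.argmax(env))
    db = 20 * np.log10(env[peak:] / (env[peak] + 1e-12) + 1e-12)
    below = np.nonzero(db < -30)[0]                # samples taken to decay by 30 dB
    return 2 * below[0] / fs if len(below) else 0.3  # extrapolate to -60 dB

def spatialize_virtual_sound(virtual, fs, decay_s, azimuth_rad):
    """Convolve with a synthetic room tail and pan toward the estimated direction."""
    t = np.arange(max(int(decay_s * fs), 1)) / fs
    ir = np.random.randn(len(t)) * np.exp(-6.9 * t / max(decay_s, 1e-3))
    ir[0] = 1.0                                    # keep a direct-path component
    wet = np.convolve(virtual, ir)[: len(virtual)]
    wet /= np.max(np.abs(wet)) + 1e-9
    pan = (np.sin(azimuth_rad) + 1) / 2            # 0 = hard left, 1 = hard right
    return np.stack([(1 - pan) * wet, pan * wet])  # stereo (left, right)

fs = 16000
recorded = np.random.randn(fs) * np.exp(-np.arange(fs) / (0.2 * fs))  # decaying room sound
virtual = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
stereo = spatialize_virtual_sound(virtual, fs, estimate_decay_seconds(recorded, fs),
                                  azimuth_rad=np.radians(45))
```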
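
For patent 9838815 (publication 20170353809), the abstract describes replacing a wind-corrupted signal component with a substitute component while keeping the frequency responses of input and output in correspondence. The sketch below assumes an STFT with a 300 Hz low/high split, a low-to-high energy-ratio wind test, and replacement of the corrupted low band using the envelope of the most recent clean frame; all of these details are illustrative, not taken from the patent.

```python
# Hypothetical sketch (the 300 Hz split, energy-ratio test, and envelope
# substitution are illustrative): detect wind-dominated frames and replace
# their low band with a component shaped like the last clean frame, so the
# output keeps a frequency response comparable to the input.
import numpy as np

def suppress_wind(signal, fs, frame=512, hop=256, cutoff_hz=300.0, ratio_db=15.0):
    window = np.hanning(frame)
    low = np.fft.rfftfreq(frame, 1 / fs) < cutoff_hz
    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    clean_low_mag = None
    for start in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(signal[start:start + frame] * window)
        low_e = np.mean(np.abs(spec[low]) ** 2) + 1e-12
        high_e = np.mean(np.abs(spec[~low]) ** 2) + 1e-12
        if 10 * np.log10(low_e / high_e) > ratio_db and clean_low_mag is not None:
            # Wind-dominated frame: keep the clean high band, replace the low
            # band with the last clean low-band envelope and random phase.
            phase = np.exp(1j * 2 * np.pi * np.random.rand(int(low.sum())))
            spec[low] = clean_low_mag * phase
        else:
            clean_low_mag = np.abs(spec[low])
        out[start:start + frame] += np.fft.irfft(spec, n=frame) * window
        norm[start:start + frame] += window ** 2
    return out / np.maximum(norm, 1e-9)

cleaned = suppress_wind(np.random.randn(16000), fs=16000)
```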
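
For publication 20170270406, the abstract describes training the lower layers of a first neural network on device-computed spatial information labels and incorporating them into a second, larger audio-classification network that is then retrained. The PyTorch sketch below shows that transfer pattern with invented layer sizes and label counts; the actual architecture and training procedure are not given in the abstract.

```python
# Hypothetical sketch (layer sizes and label counts are invented): pretrain
# the lower layers on device-computed spatial labels, then reuse them inside
# a larger audio-classification network that is retrained.
import torch
import torch.nn as nn

N_FEATURES, N_SPATIAL_LABELS, N_AUDIO_CLASSES = 64, 8, 50

class LowerLayers(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_FEATURES, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU())

    def forward(self, x):
        return self.net(x)

# 1) Small local network: lower layers plus a head that predicts spatial labels.
lower = LowerLayers()
local_model = nn.Sequential(lower, nn.Linear(128, N_SPATIAL_LABELS))
# ... train local_model on (sensor_features, spatial_label) pairs here ...

# 2) Larger network for audio classification, reusing the trained lower layers.
cloud_model = nn.Sequential(lower,                         # pretrained lower layers
                            nn.Linear(128, 256), nn.ReLU(),
                            nn.Linear(256, N_AUDIO_CLASSES))
optimizer = torch.optim.Adam(cloud_model.parameters(), lr=1e-4)
logits = cloud_model(torch.randn(4, N_FEATURES))           # sanity-check forward pass
# ... retrain cloud_model on labelled audio here ...
```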
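
For patent 9706300 (publication 20170085985), the abstract describes generating audio output from only those capture devices located in a region the user selected in a GUI. The sketch below assumes rectangular regions, known device positions, and a plain average as the mixing step; all are simplifications for illustration.

```python
# Hypothetical sketch (rectangular regions, known device positions, and a
# plain average mix are assumptions): generate output only from the capture
# devices located inside the region the user selected.
import numpy as np

def output_for_region(device_positions, device_audio, region):
    """region = (x0, y0, x1, y1); device_positions: dict name -> (x, y)."""
    x0, y0, x1, y1 = region
    selected = [name for name, (x, y) in device_positions.items()
                if x0 <= x <= x1 and y0 <= y <= y1]
    if not selected:
        return np.zeros(0)
    length = min(len(device_audio[name]) for name in selected)
    return np.mean([device_audio[name][:length] for name in selected], axis=0)

positions = {"mic_a": (1.0, 1.0), "mic_b": (4.0, 1.0), "mic_c": (4.5, 1.5)}
audio = {name: np.random.randn(16000) for name in positions}
mixed = output_for_region(positions, audio, region=(3.0, 0.0, 5.0, 2.0))  # mics b and c
```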
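
For patent 9666183 (publication 20160284346), the abstract describes training deep neural networks on labeled features so that target audio events can be predicted in noisy audio and separated from its noisy components. The PyTorch sketch below assumes magnitude-spectrogram frames, time-frequency mask targets, and a small fully connected network; the patent's feature set and network design are not specified in the abstract.

```python
# Hypothetical sketch (features, mask targets, and network size are
# assumptions): train a small DNN to predict a per-frequency mask for a
# target audio event and apply it to noisy spectrogram frames.
import torch
import torch.nn as nn

N_BINS = 257                                       # e.g. a 512-point STFT

model = nn.Sequential(nn.Linear(N_BINS, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, N_BINS), nn.Sigmoid())   # mask values in [0, 1]
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy labelled training data: magnitude frames and their ideal target masks.
features, target_masks = torch.rand(1024, N_BINS), torch.rand(1024, N_BINS)

for _ in range(10):                                # a few illustrative epochs
    optimizer.zero_grad()
    loss = loss_fn(model(features), target_masks)
    loss.backward()
    optimizer.step()

# Inference: scale the noisy magnitude frames by the predicted event mask.
noisy_frames = torch.rand(100, N_BINS)
with torch.no_grad():
    separated = model(noisy_frames) * noisy_frames
```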
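
For patent 9578439 (publication 20160198282), the abstract describes rendering directionally encoded audio to the spatial characteristics of the listener's actual loudspeaker layout, with optional focus on particular directions. The sketch below assumes pairwise amplitude panning between the two nearest loudspeakers and a Gaussian focus window; neither is the patented technique, just a common way to realize the behavior the abstract describes.

```python
# Hypothetical sketch (pairwise amplitude panning and a Gaussian focus window
# are assumptions): render a directionally encoded source to the listener's
# actual loudspeaker angles, optionally emphasizing a chosen direction.
import numpy as np

def speaker_gains(source_az, speaker_az, focus_az=None, focus_width=np.pi / 6):
    """Return one gain per loudspeaker for a source at azimuth source_az (radians)."""
    speaker_az = np.asarray(speaker_az, dtype=float)
    diffs = np.angle(np.exp(1j * (speaker_az - source_az)))   # wrapped angle differences
    a, b = np.argsort(np.abs(diffs))[:2]                      # two nearest loudspeakers
    span = abs(np.angle(np.exp(1j * (speaker_az[a] - speaker_az[b])))) + 1e-9
    wa = 1.0 - abs(diffs[a]) / span                           # share for the nearest speaker
    gains = np.zeros(len(speaker_az))
    gains[a], gains[b] = np.cos((1 - wa) * np.pi / 2), np.sin((1 - wa) * np.pi / 2)
    if focus_az is not None:                                  # emphasize one direction
        d = np.angle(np.exp(1j * (source_az - focus_az)))
        gains *= 0.25 + 0.75 * np.exp(-(d ** 2) / (2 * focus_width ** 2))
    return gains

# Example: a source at 30 degrees rendered to an irregular four-speaker layout.
gains = speaker_gains(np.radians(30), np.radians([-110, -30, 30, 110]),
                      focus_az=np.radians(20))
```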