Patents by Inventor Raghuveer Peri
Raghuveer Peri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10547947
Abstract: A headset device includes a first earpiece configured to receive a reference sound and to generate a first reference audio signal based on the reference sound. The headset device further includes a second earpiece configured to receive the reference sound and to generate a second reference audio signal based on the reference sound. The headset device further includes a controller coupled to the first earpiece and to the second earpiece. The controller is configured to generate a first signal and a second signal based on a phase relationship between the first reference audio signal and the second reference audio signal. The controller is further configured to output the first signal to the first earpiece and output the second signal to the second earpiece.
Type: Grant
Filed: May 18, 2016
Date of Patent: January 28, 2020
Assignee: Qualcomm Incorporated
Inventors: Lae-Hoon Kim, Hyun Jin Park, Erik Visser, Raghuveer Peri
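The abstract describes deriving two earpiece signals from the phase relationship between two reference microphone signals but gives no further processing detail. The sketch below is only an illustration of estimating a per-frequency phase difference and producing phase-compensated outputs; the frame size, window, and use made of the phase are assumptions, not the patented method.

```python
# Illustrative sketch only: estimate the per-bin phase relationship between two
# reference signals and produce a pair of phase-compensated output frames.
import numpy as np

def phase_based_outputs(ref_left, ref_right, frame=512):
    """Estimate the phase difference between the two reference signals per
    frequency bin and return outputs whose relative phase is compensated."""
    window = np.hanning(frame)
    out_left, out_right = [], []
    for start in range(0, min(len(ref_left), len(ref_right)) - frame, frame):
        l = np.fft.rfft(window * ref_left[start:start + frame])
        r = np.fft.rfft(window * ref_right[start:start + frame])
        # Cross-spectrum phase = phase(left) - phase(right) per frequency bin.
        phase_diff = np.angle(l * np.conj(r))
        # One possible use of the relationship: steer the right channel so both
        # earpieces receive phase-aligned versions of the reference sound.
        out_left.append(np.fft.irfft(l, frame))
        out_right.append(np.fft.irfft(r * np.exp(1j * phase_diff), frame))
    return np.concatenate(out_left), np.concatenate(out_right)

fs = 16000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440 * t)
right = np.sin(2 * np.pi * 440 * t + 0.3)          # same tone, phase-shifted
sig_l, sig_r = phase_based_outputs(left, right)
```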
-
Patent number: 10379534
Abstract: A drone system and method. Audio signals are received via one or more microphones positioned relative to a location on a drone and one or more of the audio signals are identified as of interest. Flight characteristics of the drone are then controlled based on the audio signals that are of interest.
Type: Grant
Filed: June 17, 2016
Date of Patent: August 13, 2019
Assignee: Qualcomm Incorporated
Inventors: Erik Visser, Lae-Hoon Kim, Ricardo De Jesus Bernal Castillo, Shuhua Zhang, Raghuveer Peri
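As a hypothetical illustration only (not the patented system), one could flag microphone frames whose band-limited energy exceeds a threshold as "of interest" and derive a yaw command toward the loudest microphone; the band, threshold, geometry, and control rule below are all assumptions.

```python
# Hypothetical sketch: mark frames with speech-band energy as "of interest"
# and point the drone toward the microphone that hears them most strongly.
import numpy as np

def frames_of_interest(mic_frames, fs=16000, band=(300.0, 3000.0), thresh=1e-3):
    """Return a boolean per microphone marking frames with in-band energy."""
    n = mic_frames.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(mic_frames, axis=-1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    energy = spectra[..., in_band].mean(axis=-1)
    return energy > thresh, energy

def yaw_command(energy, mic_angles_deg):
    """Aim at the microphone with the strongest sound of interest."""
    return mic_angles_deg[int(np.argmax(energy))]

rng = np.random.default_rng(0)
frames = rng.normal(scale=0.01, size=(4, 1024))      # 4 mics, one frame each
frames[2] += np.sin(2 * np.pi * 1000 * np.arange(1024) / 16000)
flags, energy = frames_of_interest(frames)
print(yaw_command(energy, mic_angles_deg=np.array([0, 90, 180, 270])))  # -> 180
```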
-
Patent number: 10073607
Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
Type: Grant
Filed: July 1, 2015
Date of Patent: September 11, 2018
Assignee: QUALCOMM Incorporated
Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng
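The signal path the abstract describes (user input from a GUI controlling how several live microphone signals are processed) can be sketched minimally as follows; the per-microphone gains standing in for GUI input and the simple mixing rule are assumptions, not the claimed interface.

```python
# Minimal sketch, not the claimed GUI: apply user-selected gains (as a GUI
# slider might supply) to each microphone signal and mix to one output.
import numpy as np

def process_mics(mic_signals, user_gains):
    """Apply a user-selected gain to each microphone signal and sum them."""
    gains = np.asarray(user_gains, dtype=float)[:, None]
    return (np.asarray(mic_signals) * gains).sum(axis=0)

mics = np.random.default_rng(1).normal(size=(3, 16000))   # 3 mics, 1 s at 16 kHz
out = process_mics(mics, user_gains=[1.0, 0.2, 0.0])       # GUI muted mic 3
```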
-
Patent number: 10051364
Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
Type: Grant
Filed: July 1, 2015
Date of Patent: August 14, 2018
Assignee: QUALCOMM Incorporated
Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng
-
Patent number: 10013996
Abstract: A method of performing noise reduction includes capturing a first audio signal at a first microphone of a first device. The method also includes receiving, at the first device, audio data representative of a second audio signal from a second device. The second audio signal is captured by a second microphone of the second device. The method further includes performing noise reduction on the first audio signal based at least in part on the audio data representative of the second audio signal.
Type: Grant
Filed: September 18, 2015
Date of Patent: July 3, 2018
Assignee: QUALCOMM Incorporated
Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
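The abstract only states that noise reduction on the first device's signal uses the second device's signal. Under the added assumption that the remote microphone mainly captures noise, a plain spectral-subtraction pass illustrates one such use; it is a sketch, not the patented algorithm.

```python
# Sketch only: spectral subtraction using the second device's signal as a
# noise reference for the first device's microphone signal.
import numpy as np

def reduce_noise(primary, remote_noise_ref, frame=512, floor=0.05):
    """Subtract the remote reference's magnitude spectrum from the primary."""
    window = np.hanning(frame)
    out = np.zeros_like(primary)
    for start in range(0, len(primary) - frame, frame // 2):   # 50% overlap-add
        p = np.fft.rfft(window * primary[start:start + frame])
        n = np.fft.rfft(window * remote_noise_ref[start:start + frame])
        mag = np.maximum(np.abs(p) - np.abs(n), floor * np.abs(p))
        out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(p)), frame)
    return out

rng = np.random.default_rng(2)
noise = rng.normal(scale=0.3, size=32000)
speech = np.sin(2 * np.pi * 220 * np.arange(32000) / 16000)
cleaned = reduce_noise(speech + noise, noise)
```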
-
Patent number: 9906885
Abstract: A method for outputting virtual sound includes detecting an audio signal in an environment at one or more microphones. The method also includes determining, at a processor, a location of a sound source of the audio signal and estimating one or more acoustical characteristics of the environment based on the audio signal. The method further includes inserting a virtual sound into the environment based on the one or more acoustical characteristics. The virtual sound has one or more audio properties of a sound generated from the location of the sound source.
Type: Grant
Filed: August 16, 2016
Date of Patent: February 27, 2018
Assignee: QUALCOMM Incorporated
Inventors: Erik Visser, Lae-Hoon Kim, Raghuveer Peri
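One plausible reading of the abstract is: estimate a simple acoustical characteristic of the room (here, a reverberation decay) from the detected audio, then render the virtual sound through a matching synthetic room response. The exponential-decay model and the noise-tail reverb below are assumptions for illustration only.

```python
# Hedged sketch: fit a decay rate to the observed audio's envelope and apply a
# matching synthetic reverb tail to the virtual sound before insertion.
import numpy as np

def estimate_decay(envelope, fs):
    """Fit an exponential decay rate (per second) to an energy envelope."""
    t = np.arange(len(envelope)) / fs
    log_e = np.log(np.maximum(envelope, 1e-8))
    slope = np.polyfit(t, log_e, 1)[0]
    return max(-slope, 1.0)

def insert_virtual_sound(virtual, decay_rate, fs, length=0.5):
    """Convolve the virtual sound with an exponentially decaying reverb tail."""
    n = int(length * fs)
    rng = np.random.default_rng(3)
    tail = rng.normal(size=n) * np.exp(-decay_rate * np.arange(n) / fs)
    tail[0] = 1.0                                     # keep the direct path
    return np.convolve(virtual, tail)[: len(virtual)]

fs = 16000
observed = np.exp(-3.0 * np.arange(fs) / fs)          # decaying room response
virtual = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
rendered = insert_virtual_sound(virtual, estimate_decay(observed, fs), fs)
```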
-
Publication number: 20180020312
Abstract: A method for outputting virtual sound includes detecting an audio signal in an environment at one or more microphones. The method also includes determining, at a processor, a location of a sound source of the audio signal and estimating one or more acoustical characteristics of the environment based on the audio signal. The method further includes inserting a virtual sound into the environment based on the one or more acoustical characteristics. The virtual sound has one or more audio properties of a sound generated from the location of the sound source.
Type: Application
Filed: August 16, 2016
Publication date: January 18, 2018
Inventors: Erik Visser, Lae-Hoon Kim, Raghuveer Peri
-
Publication number: 20170353809
Abstract: A method of operation of a device includes receiving an input signal at the device. The input signal is generated using at least one microphone. The input signal includes a first signal component having a first amount of wind turbulence noise and a second signal component having a second amount of wind turbulence noise that is greater than the first amount of wind turbulence noise. The method further includes generating, based on the input signal, an output signal at the device. The output signal includes the first signal component and a third signal component that replaces the second signal component. A first frequency response of the input signal corresponds to a second frequency response of the output signal.
Type: Application
Filed: June 1, 2016
Publication date: December 7, 2017
Inventors: Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Raghuveer Peri, Yinyi Guo
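The abstract describes keeping the less wind-corrupted component and substituting a third component for the wind-dominated one while preserving the input's frequency response. The sketch below is only an illustration of that shape of processing: the 300 Hz band split and the random-phase resynthesis are assumptions, not the published method.

```python
# Illustrative sketch: split off a band assumed to be wind-dominated and
# replace it with a component that keeps the original magnitudes (so the
# output's frequency response corresponds to the input's).
import numpy as np

def replace_windy_band(signal, fs, cutoff=300.0):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    windy = freqs < cutoff                            # band assumed wind-dominated
    # Third component: keep the original magnitudes but re-excite the band with
    # random phase, discarding the turbulent fine structure.
    rng = np.random.default_rng(4)
    new_phase = rng.uniform(-np.pi, np.pi, size=windy.sum())
    spectrum[windy] = np.abs(spectrum[windy]) * np.exp(1j * new_phase)
    return np.fft.irfft(spectrum, len(signal))

fs = 16000
t = np.arange(fs) / fs
wind = np.cumsum(np.random.default_rng(5).normal(size=fs)) * 0.01   # low-freq rumble
clean = np.sin(2 * np.pi * 1000 * t)
output = replace_windy_band(clean + wind, fs)
```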
-
Patent number: 9838815
Abstract: A method of operation of a device includes receiving an input signal at the device. The input signal is generated using at least one microphone. The input signal includes a first signal component having a first amount of wind turbulence noise and a second signal component having a second amount of wind turbulence noise that is greater than the first amount of wind turbulence noise. The method further includes generating, based on the input signal, an output signal at the device. The output signal includes the first signal component and a third signal component that replaces the second signal component. A first frequency response of the input signal corresponds to a second frequency response of the output signal.
Type: Grant
Filed: June 1, 2016
Date of Patent: December 5, 2017
Assignee: QUALCOMM Incorporated
Inventors: Shuhua Zhang, Erik Visser, Lae-Hoon Kim, Raghuveer Peri, Yinyi Guo
-
Publication number: 20170339491
Abstract: A headset device includes a first earpiece configured to receive a reference sound and to generate a first reference audio signal based on the reference sound. The headset device further includes a second earpiece configured to receive the reference sound and to generate a second reference audio signal based on the reference sound. The headset device further includes a controller coupled to the first earpiece and to the second earpiece. The controller is configured to generate a first signal and a second signal based on a phase relationship between the first reference audio signal and the second reference audio signal. The controller is further configured to output the first signal to the first earpiece and output the second signal to the second earpiece.
Type: Application
Filed: May 18, 2016
Publication date: November 23, 2017
Inventors: Lae-Hoon Kim, Hyun Jin Park, Erik Visser, Raghuveer Peri
-
Publication number: 20170270406
Abstract: A method of training a device specific cloud-based audio processor includes receiving sensor data captured from multiple sensors at a local device. The method also includes receiving spatial information labels computed on the local device using local configuration information. The spatial information labels are associated with the captured sensor data. Lower layers of a first neural network are trained based on the spatial information labels and sensor data. The trained lower layers are incorporated into a second, larger neural network for audio classification. The second, larger neural network may be retrained using the trained lower layers of the first neural network.Type: Application
Filed: September 22, 2016
Publication date: September 21, 2017
Inventors: Erik Visser, Minho Jin, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
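The transfer-learning structure the abstract describes (train a small network on device-computed spatial labels, then reuse its lower layers inside a larger audio-classification network) can be sketched loosely as below; the layer sizes, the tiny gradient-descent loop, and the random stand-in data are placeholders, not the published design.

```python
# Loose sketch of the lower-layer transfer described in the abstract.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(256, 40))                               # 40-dim audio features
spatial_labels = (X[:, :2].sum(axis=1) > 0).astype(float)    # stand-in spatial labels

# --- first, small network: one hidden layer fit to the spatial labels ---
W1, W2 = rng.normal(scale=0.1, size=(40, 16)), rng.normal(scale=0.1, size=(16, 1))
for _ in range(200):
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))
    grad_p = (p - spatial_labels[:, None]) / len(X)          # logistic-loss gradient
    W2 -= 0.5 * h.T @ grad_p
    W1 -= 0.5 * X.T @ ((grad_p @ W2.T) * (1 - h ** 2))

# --- second, larger network: reuse the trained lower layer W1 ---
W1_big = W1.copy()                                           # transferred lower layer
W2_big = rng.normal(scale=0.1, size=(16, 32))                # new, larger upper layers
W3_big = rng.normal(scale=0.1, size=(32, 10))                # e.g. 10 audio classes
logits = np.maximum(np.tanh(X @ W1_big) @ W2_big, 0) @ W3_big
```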
-
Publication number: 20170220036
Abstract: A drone system and method. Audio signals are received via one or more microphones positioned relative to a location on a drone and one or more of the audio signals are identified as of interest. Flight characteristics of the drone are then controlled based on the audio signals that are of interest.
Type: Application
Filed: June 17, 2016
Publication date: August 3, 2017
Inventors: Erik Visser, Lae-Hoon Kim, Ricardo De Jesus Bernal Castillo, Shuhua Zhang, Raghuveer Peri
-
Patent number: 9706300
Abstract: A method of generating audio output includes displaying a graphical user interface (GUI) at a user device. The GUI represents an area having multiple regions and multiple audio capture devices are located in the area. The method also includes receiving audio data from the multiple audio capture devices. The method further includes receiving an input indicating a selected region of the multiple regions. The method also includes generating, at the user device, audio output based on audio data from a subset of the multiple audio capture devices. Each audio capture device in the subset is located in the selected region.
Type: Grant
Filed: September 18, 2015
Date of Patent: July 11, 2017
Assignee: QUALCOMM Incorporated
Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
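A minimal sketch of the selection step the abstract describes, assuming each capture device reports an (x, y) position: keep only the devices inside the user-selected region and mix their audio. The rectangular region and equal-weight mix are assumptions, not the claimed GUI.

```python
# Minimal sketch: mix audio only from capture devices inside the selected region.
import numpy as np

def output_for_region(device_positions, device_audio, region):
    """region = (x_min, y_min, x_max, y_max) selected in the GUI."""
    x_min, y_min, x_max, y_max = region
    pos = np.asarray(device_positions, dtype=float)
    inside = (pos[:, 0] >= x_min) & (pos[:, 0] <= x_max) & \
             (pos[:, 1] >= y_min) & (pos[:, 1] <= y_max)
    audio = np.asarray(device_audio)
    return audio[inside].mean(axis=0) if inside.any() else np.zeros_like(audio[0])

positions = [(1, 1), (4, 1), (4, 4)]                  # three devices in the area
audio = np.random.default_rng(7).normal(size=(3, 16000))
mixed = output_for_region(positions, audio, region=(3, 0, 5, 5))   # right half
```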
-
Patent number: 9666183
Abstract: Disclosed is a feature extraction and classification methodology wherein audio data is gathered in a target environment under varying conditions. From this collected data, corresponding features are extracted, labeled with appropriate filters (e.g., audio event descriptions), and used for training deep neural networks (DNNs) to extract underlying target audio events from unlabeled training data. Once trained, these DNNs are used to predict underlying events in noisy audio to extract therefrom features that enable the separation of the underlying audio events from the noisy components thereof.
Type: Grant
Filed: March 27, 2015
Date of Patent: May 30, 2017
Assignee: QUALCOMM Incorporated
Inventors: Erik Visser, Yinyi Guo, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
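The separation step implied by the abstract can be sketched as applying a per-time-frequency mask, predicted from the noisy audio, to pull the target event out of the mixture. Because the abstract gives no network details, the trained DNN is replaced below by a placeholder energy-threshold function; everything else (STFT sizes, test signal) is likewise assumed.

```python
# Sketch of the separation step only; `predict_mask` is a stand-in for the DNN.
import numpy as np

def stft(x, frame=512, hop=256):
    w = np.hanning(frame)
    return np.array([np.fft.rfft(w * x[i:i + frame])
                     for i in range(0, len(x) - frame, hop)])

def predict_mask(noisy_spec):
    """Stand-in for the trained DNN: mark bins above the per-frame median energy."""
    mag = np.abs(noisy_spec)
    return (mag > np.median(mag, axis=1, keepdims=True)).astype(float)

fs = 16000
t = np.arange(2 * fs) / fs
event = np.sin(2 * np.pi * 800 * t)                   # target audio event
noisy = event + np.random.default_rng(8).normal(scale=0.5, size=len(t))
spec = stft(noisy)
separated_spec = predict_mask(spec) * spec            # masked (separated) event
```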
-
Publication number: 20170085985
Abstract: A method of generating audio output includes displaying a graphical user interface (GUI) at a user device. The GUI represents an area having multiple regions and multiple audio capture devices are located in the area. The method also includes receiving audio data from the multiple audio capture devices. The method further includes receiving an input indicating a selected region of the multiple regions. The method also includes generating, at the user device, audio output based on audio data from a subset of the multiple audio capture devices. Each audio capture device in the subset is located in the selected region.
Type: Application
Filed: September 18, 2015
Publication date: March 23, 2017
Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
-
Publication number: 20170084286
Abstract: A method of performing noise reduction includes capturing a first audio signal at a first microphone of a first device. The method also includes receiving, at the first device, audio data representative of a second audio signal from a second device. The second audio signal is captured by a second microphone of the second device. The method further includes performing noise reduction on the first audio signal based at least in part on the audio data representative of the second audio signal.
Type: Application
Filed: September 18, 2015
Publication date: March 23, 2017
Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri
-
Patent number: 9578439
Abstract: Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.
Type: Grant
Filed: July 23, 2015
Date of Patent: February 21, 2017
Assignee: QUALCOMM Incorporated
Inventors: Lae-Hoon Kim, Raghuveer Peri, Erik Visser
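One illustrative remapping consistent with the abstract is to compute, for each direction-tagged source, gains toward the user's actual loudspeaker angles, optionally emphasizing a chosen focus direction. The cosine panning law and Gaussian focus weighting below are assumptions, not the patented technique.

```python
# Hedged illustration: pan direction-tagged sources onto an arbitrary
# loudspeaker layout, with an optional focus on one direction in the scene.
import numpy as np

def render_to_layout(source_dirs_deg, speaker_dirs_deg,
                     focus_dir_deg=None, focus_width=60.0):
    """Return a (num_sources x num_speakers) gain matrix for the actual layout."""
    src = np.radians(np.asarray(source_dirs_deg, dtype=float))[:, None]
    spk = np.radians(np.asarray(speaker_dirs_deg, dtype=float))[None, :]
    gains = np.maximum(np.cos(src - spk), 0.0)        # pan toward nearby speakers
    gains /= gains.sum(axis=1, keepdims=True) + 1e-12
    if focus_dir_deg is not None:                     # emphasize a chosen direction
        wrapped = np.angle(np.exp(1j * (src[:, 0] - np.radians(focus_dir_deg))))
        diff = np.degrees(np.abs(wrapped))
        gains *= np.exp(-(diff / focus_width) ** 2)[:, None]
    return gains

# Scene recorded with sources at 0 and 120 degrees, played on a 4-speaker layout.
print(render_to_layout([0, 120], [45, 135, 225, 315], focus_dir_deg=120))
```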
-
Publication number: 20160284346
Abstract: Disclosed is a feature extraction and classification methodology wherein audio data is gathered in a target environment under varying conditions. From this collected data, corresponding features are extracted, labeled with appropriate filters (e.g., audio event descriptions), and used for training deep neural networks (DNNs) to extract underlying target audio events from unlabeled training data. Once trained, these DNNs are used to predict underlying events in noisy audio to extract therefrom features that enable the separation of the underlying audio events from the noisy components thereof.
Type: Application
Filed: March 27, 2015
Publication date: September 29, 2016
Inventors: Erik Visser, Yinyi Guo, Lae-Hoon Kim, Raghuveer Peri, Shuhua Zhang
-
Publication number: 20160198282
Abstract: Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.
Type: Application
Filed: July 23, 2015
Publication date: July 7, 2016
Inventors: Lae-Hoon Kim, Raghuveer Peri, Erik Visser
-
Publication number: 20160004499
Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
Type: Application
Filed: July 1, 2015
Publication date: January 7, 2016
Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng