Patents by Inventor Ravish Mehra

Ravish Mehra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10959037
    Abstract: Embodiments relate to a headset that filters sounds according to a direction of a gaze of a user wearing the headset. The user wears the headset including an eye tracking unit and one or more microphones. The eye tracking unit tracks an orientation of an eye of the user to determine the direction of the gaze of the user. The direction of the gaze may be different from a facing direction of the headset. According to the determined direction of the gaze of the user, input sound signals generated by the microphones can be beamformed to amplify or emphasize sound originating from the direction of the gaze.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: March 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventor: Ravish Mehra
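The gaze-steered filtering in patent 10959037 above can be pictured as a beamformer whose steering direction comes from the eye tracker rather than from the facing direction of the headset. Below is a minimal delay-and-sum sketch, assuming a small planar microphone array, plane-wave propagation, and a gaze azimuth supplied by the eye tracking unit; the array geometry, sample rate, and function names are illustrative, not the patented implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def delay_and_sum(mic_signals, mic_positions, gaze_azimuth_rad, fs):
    """Steer a delay-and-sum beamformer toward the direction of the user's gaze.

    mic_signals:      (num_mics, num_samples) microphone samples
    mic_positions:    (num_mics, 2) microphone x/y coordinates in meters
    gaze_azimuth_rad: gaze direction reported by the eye tracker, in radians
    fs:               sample rate in Hz
    """
    # Unit vector pointing toward the gaze direction (plane-wave assumption).
    toward_gaze = np.array([np.cos(gaze_azimuth_rad), np.sin(gaze_azimuth_rad)])

    # Each microphone hears a wavefront from that direction slightly earlier or
    # later; these are the per-microphone delays needed to time-align them.
    delays = mic_positions @ toward_gaze / SPEED_OF_SOUND  # seconds

    num_samples = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)

    # Apply the fractional delays in the frequency domain, then average across
    # microphones so sound from the gaze direction adds coherently.
    aligned = spectra * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=num_samples)


if __name__ == "__main__":
    # Toy usage: two microphones 8 cm apart, user gazing 30 degrees to the left.
    fs = 16000
    t = np.arange(fs) / fs
    mics = np.stack([np.sin(2 * np.pi * 440 * t)] * 2)  # placeholder signals
    positions = np.array([[-0.04, 0.0], [0.04, 0.0]])
    print(delay_and_sum(mics, positions, np.deg2rad(30), fs).shape)
```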
  • Patent number: 10939204
    Abstract: One embodiment of the present application sets forth a computer-implemented method that includes receiving, from a first microphone, a first input acoustic signal, generating a first audio spectrum from at least the first input acoustic signal, wherein the first audio spectrum includes a set of time-frequency bins, and selecting a first time-frequency bin from the set based on a first local space-domain distance (LSDD) computed for the first time-frequency bin.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: March 2, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Vladimir Tourbabin, Ravish Mehra
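Patent 10939204 above selects time-frequency bins according to a local space-domain distance (LSDD). As a rough sketch, one can stand in for the LSDD with the distance between a bin's normalized microphone snapshot and the closest candidate steering vector (ignoring, for brevity, the frequency dependence of those steering vectors); the stand-in metric and names below are illustrative assumptions, not the patent's definition.

```python
import numpy as np


def space_domain_distance(snapshot, steering_vectors):
    """Illustrative stand-in for an LSDD spectrum: distance between one
    time-frequency snapshot and each candidate steering vector, with both
    normalized to unit length. Smaller distance = better spatial match."""
    v = snapshot / (np.linalg.norm(snapshot) + 1e-12)
    a = steering_vectors / np.linalg.norm(steering_vectors, axis=0, keepdims=True)
    return np.linalg.norm(a - v[:, None], axis=0)


def select_bins(stft, steering_vectors, num_bins=10):
    """Pick the time-frequency bins whose best spatial match is strongest.

    stft:             (num_mics, num_freqs, num_frames) complex STFT
    steering_vectors: (num_mics, num_directions) candidate array responses
    Returns (frequency indices, frame indices) of the selected bins.
    """
    _, num_freqs, num_frames = stft.shape
    scores = np.empty((num_freqs, num_frames))
    for f in range(num_freqs):
        for t in range(num_frames):
            distances = space_domain_distance(stft[:, f, t], steering_vectors)
            scores[f, t] = distances.min()  # best match for this bin
    chosen = np.argsort(scores, axis=None)[:num_bins]
    return np.unravel_index(chosen, scores.shape)
```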
  • Patent number: 10917735
    Abstract: Embodiments relate to obtaining a head-related transfer function (HRTF) by performing a simulation using images of a user's head. The geometry of the user's head is determined based in part on one or more images of the user's head. The simulation of sound propagation from an audio source to the user's head is performed based on the determined geometry. The geometry may be represented as a three-dimensional mesh or as a principal component analysis (PCA)-based representation in which the user's head is expressed as a combination of representative three-dimensional shapes of test subjects' heads.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: February 9, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Antonio John Miller, Ravish Mehra
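For the PCA-based head representation mentioned in patent 10917735 above, a user's head can be expressed as a mean shape plus a weighted combination of principal shape components learned from test subjects' heads. A minimal sketch, assuming all head meshes are in dense vertex correspondence; the function names and component count are illustrative.

```python
import numpy as np


def fit_head_basis(training_meshes, num_components=10):
    """Learn a PCA basis from test subjects' head meshes.

    training_meshes: (num_subjects, num_vertices * 3) flattened vertex arrays,
    all assumed to be in dense correspondence with one another.
    """
    mean_shape = training_meshes.mean(axis=0)
    centered = training_meshes - mean_shape
    # SVD of the centered data gives the principal shape components (rows of vt).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_shape, vt[:num_components]


def project_head(user_mesh, mean_shape, components):
    """Express a user's head as weights over the representative shapes."""
    weights = components @ (user_mesh - mean_shape)
    reconstruction = mean_shape + weights @ components
    return weights, reconstruction
```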
  • Publication number: 20200396558
    Abstract: An audio system captures audio data of test sounds through a microphone of a headset worn by a user. The test sounds are played by an external speaker, and the audio data includes audio data captured for different orientations of the headset with respect to the external speaker. A set of head-related transfer functions (HRTFs) is calculated based at least in part on the audio data of the test sounds at the different orientations of the headset. A portion of the set of HRTFs is discarded to create an intermediate set of HRTFs. The discarded portion corresponds to one or more distortion regions that are based in part on wearing the headset. One or more HRTFs are generated that correspond to the discarded portion using at least some of the intermediate set of HRTFs to create an individualized set of HRTFs for the user.
    Type: Application
    Filed: August 28, 2020
    Publication date: December 17, 2020
    Inventors: David Lou Alon, Maria Cuevas Rodriguez, Ravish Mehra, Philip Robinson
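The discard-and-regenerate step shared by publication 20200396558 above and the related entries below can be pictured as masking out HRTF measurements that fall in distortion regions and refilling them from nearby retained directions. A minimal sketch, assuming a precomputed boolean distortion mask and simple nearest-neighbor blending of impulse responses; both assumptions are illustrative, not the published method.

```python
import numpy as np


def regenerate_hrtfs(hrtfs, directions, distortion_mask, k=4):
    """Replace HRTFs in distortion regions using neighboring measurements.

    hrtfs:           (num_directions, num_taps) measured HRTF impulse responses
    directions:      (num_directions, 3) unit vectors of measurement directions
    distortion_mask: boolean array, True where the measurement is distorted
                     (e.g. directions blocked or altered by the worn headset)
    """
    keep = ~distortion_mask
    intermediate = hrtfs[keep]          # intermediate set of HRTFs
    kept_dirs = directions[keep]

    individualized = hrtfs.copy()
    for i in np.flatnonzero(distortion_mask):
        # Angular similarity between the discarded direction and every retained one.
        cos_sim = kept_dirs @ directions[i]
        nearest = np.argsort(cos_sim)[-k:]
        weights = np.clip(cos_sim[nearest], 0.0, None)
        weights = weights / (weights.sum() + 1e-12)
        # Regenerate the discarded HRTF as a weighted blend of its neighbors.
        individualized[i] = weights @ intermediate[nearest]
    return individualized
```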
  • Publication number: 20200389716
    Abstract: An audio system for providing content to a user. The system includes a first and a second transducer assembly of a plurality of transducer assemblies, an acoustic sensor, and a controller. The first transducer assembly couples to a portion of an auricle of the user's ear and vibrates over a first range of frequencies based on a first set of audio instructions. The vibration causes the portion of the ear to create a first range of acoustic pressure waves. The second transducer assembly is configured to vibrate over a second range of frequencies to produce a second range of acoustic pressure waves based on a second set of audio instructions. The acoustic sensor detects acoustic pressure waves at an entrance of the ear. The controller generates the audio instructions based on audio content to be provided to the user and the detected acoustic pressure waves from the acoustic sensor.
    Type: Application
    Filed: July 17, 2020
    Publication date: December 10, 2020
    Inventors: Ravish Mehra, Antonio John Miller, Morteza Khaleghimeybodi
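The split of audio content across the two transducer assemblies in publication 20200389716 above (and the granted patent 10757501 below) can be pictured as a crossover: one frequency range goes to the first assembly, the rest to the second. A minimal sketch using SciPy Butterworth filters; the crossover frequency and filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt


def split_two_bands(audio, fs, crossover_hz=800.0, order=4):
    """Split audio content into a low band (first transducer assembly) and a
    high band (second transducer assembly) with Butterworth crossover filters."""
    low_sos = butter(order, crossover_hz, btype="lowpass", fs=fs, output="sos")
    high_sos = butter(order, crossover_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(low_sos, audio), sosfilt(high_sos, audio)


if __name__ == "__main__":
    # Toy usage: a signal containing a 200 Hz and a 3 kHz component.
    fs = 48000
    t = np.arange(fs) / fs
    content = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)
    low, high = split_two_bands(content, fs)
    print(low.shape, high.shape)
```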
  • Patent number: 10812890
    Abstract: An audio system includes a transducer assembly, an acoustic sensor, and a controller. The transducer assembly is coupled to a back of an auricle of an ear of the user. The transducer assembly vibrates the auricle over a frequency range to cause the auricle to create an acoustic pressure wave in accordance with vibration instructions. The acoustic sensor detects the acoustic pressure wave at an entrance of the ear of the user. The controller dynamically adjusts a frequency response model based in part on the detected acoustic pressure wave, updates the vibration instructions using the adjusted frequency response model, and provides the updated vibration instructions to the transducer assembly.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: October 20, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Antonio John Miller, Ravish Mehra
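The adjustment loop in patent 10812890 above updates the frequency response model from the pressure wave detected at the ear entrance. A minimal per-band sketch, assuming the vibration instructions are processed in analysis bands and nudged toward a target response; the band structure, smoothing rate, and names are illustrative assumptions.

```python
import numpy as np


def update_equalizer(eq_gains, detected_spectrum, target_spectrum, rate=0.2):
    """Nudge per-band gains so the detected acoustic pressure wave moves
    toward the target frequency response model.

    eq_gains, detected_spectrum, target_spectrum: (num_bands,) linear magnitudes
    rate: fraction of the remaining (log-domain) error corrected per update
    """
    error = target_spectrum / (detected_spectrum + 1e-12)
    # A fractional power of the error is a gentle multiplicative correction.
    return eq_gains * error ** rate


def apply_equalizer(band_signals, eq_gains):
    """Scale each analysis band of the vibration instructions by its gain."""
    return band_signals * eq_gains[:, None]


if __name__ == "__main__":
    gains = np.ones(4)
    detected = np.array([0.5, 1.0, 2.0, 1.0])  # measured band magnitudes
    target = np.ones(4)                        # flat target response
    for _ in range(10):
        gains = update_equalizer(gains, detected * gains, target)
    print(np.round(gains, 3))  # converges toward [2, 1, 0.5, 1]
```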
  • Publication number: 20200327877
    Abstract: An audio system for a wearable device dynamically updates acoustic transfer functions. The audio system is configured to estimate a direction of arrival (DoA) of each sound source detected by a microphone array relative to a position of the wearable device within a local area. The audio system may track the movement of each sound source. The audio system may form a beam in the direction of each sound source. The audio system may identify and classify each sound source based on the sound source properties. Based on the DoA estimates, the movement tracking, and the beamforming, the audio system generates or updates the acoustic transfer functions for the sound sources.
    Type: Application
    Filed: April 9, 2019
    Publication date: October 15, 2020
    Inventors: Vladimir Tourbabin, Jacob Ryan Donley, Antonio John Miller, Ravish Mehra
  • Patent number: 10798515
    Abstract: An audio system captures audio data of test sounds through a microphone of a headset worn by a user. The test sounds are played by an external speaker, and the audio data includes audio data captured for different orientations of the headset with respect to the external speaker. A set of head-related transfer functions (HRTFs) is calculated based at least in part on the audio data of the test sounds at the different orientations of the headset. A portion of the set of HRTFs is discarded to create an intermediate set of HRTFs. The discarded portion corresponds to one or more distortion regions that are based in part on wearing the headset. One or more HRTFs are generated that correspond to the discarded portion using at least some of the intermediate set of HRTFs to create an individualized set of HRTFs for the user.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: October 6, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: David Lou Alon, Maria Cuevas Rodriguez, Ravish Mehra, Philip Robinson
  • Patent number: 10757501
    Abstract: An audio system for providing content to a user. The system includes a first and a second transducer assembly of a plurality of transducer assemblies, an acoustic sensor, and a controller. The first transducer assembly couples to a portion of an auricle of the user's ear and vibrates over a first range of frequencies based on a first set of audio instructions. The vibration causes the portion of the ear to create a first range of acoustic pressure waves. The second transducer assembly is configured to vibrate over a second range of frequencies to produce a second range of acoustic pressure waves based on a second set of audio instructions. The acoustic sensor detects acoustic pressure waves at an entrance of the ear. The controller generates the audio instructions based on audio content to be provided to the user and the detected acoustic pressure waves from the acoustic sensor.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: August 25, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Ravish Mehra, Antonio John Miller, Morteza Khaleghimeybodi
  • Publication number: 20200245091
    Abstract: An audio system captures audio data of test sounds through a microphone of a headset worn by a user. The test sounds are played by an external speaker, and the audio data includes audio data captured for different orientations of the headset with respect to the external speaker. A set of head-related transfer functions (HRTFs) is calculated based at least in part on the audio data of the test sounds at the different orientations of the headset. A portion of the set of HRTFs is discarded to create an intermediate set of HRTFs. The discarded portion corresponds to one or more distortion regions that are based in part on wearing the headset. One or more HRTFs are generated that correspond to the discarded portion using at least some of the intermediate set of HRTFs to create an individualized set of HRTFs for the user.
    Type: Application
    Filed: September 6, 2019
    Publication date: July 30, 2020
    Inventors: David Lou Alon, Maria Cuevas Rodriguez, Ravish Mehra, Philip Robinson
  • Patent number: 10728657
    Abstract: An image of at least a portion of a head of a user is received. A geometry of the head wearing an eyewear device is generated based in part on the received image of the head and a geometry of the eyewear device. The geometry of the eyewear device includes a microphone array composed of a plurality of acoustic sensors that are configured to detect sounds within a local area surrounding the microphone array. A simulation of sound propagation between an audio source and the plurality of acoustic sensors is performed based on the generated geometry. An acoustic transfer function (ATF) associated with the microphone array is determined based on the simulation. The determined ATF is customized to the user and is provided to the eyewear device of the user.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: July 28, 2020
    Assignee: Facebook Technologies, LLC
    Inventor: Ravish Mehra
  • Patent number: 10721581
    Abstract: A virtual reality (VR) system simulates sounds that a user of the VR system perceives to have originated from sources at desired virtual locations of the VR system. The simulated sounds are generated based on personalized head-related transfer functions (HRTF) of the user that are constructed by applying machine-learned models to a set of anatomical features identified for the user. The set of anatomical features may be identified from images of the user captured by a camera. In one instance, the HRTF is represented as a reduced set of parameters that allow the machine-learned models to capture the variability in HRTF across individual users while being trained in a computationally-efficient manner.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: July 21, 2020
    Assignee: Facebook Technologies, LLC
    Inventor: Ravish Mehra
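Patent 10721581 above maps anatomical features to a reduced set of HRTF parameters with machine-learned models. A minimal sketch, assuming the reduced parameters are PCA coefficients of HRTF magnitude data and the model is ridge regression; both choices, and the feature inputs, are illustrative stand-ins for the patented pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge


def train_hrtf_predictor(features, hrtf_magnitudes, num_params=20):
    """Learn a mapping from anatomical features to a reduced HRTF parameterization.

    features:        (num_subjects, num_features) e.g. pinna and head measurements
    hrtf_magnitudes: (num_subjects, num_directions * num_freqs) flattened HRTFs
    """
    pca = PCA(n_components=num_params).fit(hrtf_magnitudes)
    params = pca.transform(hrtf_magnitudes)          # reduced set of parameters
    model = Ridge(alpha=1.0).fit(features, params)   # machine-learned model
    return model, pca


def personalize_hrtf(model, pca, user_features):
    """Predict a personalized HRTF for one user from their anatomical features."""
    params = model.predict(user_features[None, :])
    return pca.inverse_transform(params)[0]
```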
  • Patent number: 10715909
    Abstract: One embodiment of the present application sets forth a computer-implemented method that includes receiving, from a first microphone, a first input acoustic signal, generating a first audio spectrum from at least the first input acoustic signal, where the first audio spectrum includes a set of time-frequency bins, for each time-frequency bin included in the set of time-frequency bins, computing a weighted local space-domain distance (LSDD) spectrum value based on a portion of the first audio spectrum that is included in the time-frequency bin, generating a combined spectrum value based on a set of the weighted LSDD spectrum values computed for the set of time-frequency bins, and determining a first estimated direction of the first input acoustic signal based on the combined spectrum value.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: July 14, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Vladimir Tourbabin, Ravish Mehra
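Patent 10715909 above combines weighted per-bin LSDD spectra into a single direction estimate. A simplified sketch, assuming a stand-in similarity spectrum (correlation of each bin's snapshot with candidate steering vectors) and a weight that favors bins dominated by a single direction; the metric and weighting are illustrative, not the patent's exact LSDD.

```python
import numpy as np


def direction_spectrum(snapshot, steering_vectors):
    """Similarity of one time-frequency snapshot to each candidate direction
    (higher = better match); a simplified stand-in for an LSDD spectrum."""
    v = snapshot / (np.linalg.norm(snapshot) + 1e-12)
    a = steering_vectors / np.linalg.norm(steering_vectors, axis=0, keepdims=True)
    return np.abs(np.conj(a).T @ v)


def estimate_doa(stft, steering_vectors, candidate_angles):
    """Combine weighted per-bin spectra and return the estimated direction.

    stft:             (num_mics, num_freqs, num_frames) complex STFT
    steering_vectors: (num_mics, num_directions) candidate array responses
    candidate_angles: (num_directions,) angles matching the steering vectors
    """
    _, num_freqs, num_frames = stft.shape
    combined = np.zeros(steering_vectors.shape[1])
    for f in range(num_freqs):
        for t in range(num_frames):
            spec = direction_spectrum(stft[:, f, t], steering_vectors)
            # Weight bins whose spectrum is dominated by a single direction.
            weight = spec.max() / (spec.mean() + 1e-12)
            combined += weight * spec
    return candidate_angles[np.argmax(combined)]
```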
  • Patent number: 10679407
    Abstract: Methods, systems, and computer readable media for simulating sound propagation are disclosed. According to one method, the method includes decomposing a virtual environment scene including at least one object into a plurality of surface regions, wherein each of the surface regions includes a plurality of surface patches. The method further includes organizing sound rays generated by a sound source in the virtual environment scene into a plurality of path tracing groups, wherein each of the path tracing groups comprises a group of the rays that traverses a sequence of surface patches. The method also includes determining, for each of the path tracing groups, a sound intensity by combining a sound intensity computed for a current time with one or more previously computed sound intensities respectively associated with previous times and generating a simulated output sound at a listener position using the determined sound intensities.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: June 9, 2020
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Carl Henry Schissler, Ravish Mehra, Dinesh Manocha
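The temporal step of patent 10679407 above combines each path tracing group's current intensity with intensities computed at previous times. A minimal sketch, assuming rays are keyed by the tuple of surface patches they traverse and smoothed with a fixed history weight; both assumptions are illustrative.

```python
from collections import defaultdict


class PathTracingGroups:
    """Accumulate sound intensities for groups of rays that traverse the same
    sequence of surface patches, blending each frame with earlier frames."""

    def __init__(self, history_weight=0.8):
        self.history_weight = history_weight
        self.intensity = defaultdict(float)  # patch sequence -> smoothed intensity

    def update(self, rays):
        """rays: iterable of (patch_sequence_tuple, intensity_this_frame)."""
        frame_totals = defaultdict(float)
        for patch_sequence, intensity in rays:
            frame_totals[patch_sequence] += intensity
        for patch_sequence, current in frame_totals.items():
            previous = self.intensity[patch_sequence]
            # Combine the current intensity with previously computed intensities.
            self.intensity[patch_sequence] = (
                self.history_weight * previous + (1.0 - self.history_weight) * current
            )
        return dict(self.intensity)


if __name__ == "__main__":
    # Toy usage: rays grouped by the patch sequences (3, 7) and (2, 5, 9).
    groups = PathTracingGroups()
    print(groups.update([((3, 7), 0.4), ((2, 5, 9), 0.1), ((3, 7), 0.2)]))
```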
  • Patent number: 10659875
    Abstract: One embodiment of the present application sets forth a computer-implemented method that includes receiving, from a first microphone, a first input acoustic signal, generating a first audio spectrum from at least the first input acoustic signal, wherein the first audio spectrum includes a set of time-frequency bins, and selecting a first time-frequency bin from the set based on a first local space-domain distance (LSDD) computed for the first time-frequency bin.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: May 19, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Vladimir Tourbabin, Ravish Mehra
  • Publication number: 20200154195
    Abstract: A method for manufacturing a cartilage conduction audio device is disclosed. A manufacturing system receives data describing a three-dimensional shape of an ear (e.g., the outer ear, behind the ear, the concha bowl, etc.) of a user. The system identifies one or more locations for one or more transducers along a back of an auricle of the ear for the user that vibrate the auricle over a frequency range causing the auricle to create an acoustic pressure wave at an entrance of the ear canal. The system then generates a design for a cartilage conduction audio device for the user based on the one or more identified locations of the transducers at which acoustic pressure waves generated by the one or more transducers satisfy a threshold performance metric for the user. The design may then be used to fabricate the cartilage conduction audio device.
    Type: Application
    Filed: January 14, 2020
    Publication date: May 14, 2020
    Inventors: Morteza Khaleghimeybodi, Antonio John Miller, Ravish Mehra
  • Patent number: 10638222
    Abstract: A system performs an optimization algorithm to optimize two or more acoustic sensors of a microphone array. The system obtains an array transfer function (ATF) for a plurality of combinations of the acoustic sensors of the microphone array. In a first embodiment, the algorithm optimizes an active set of acoustic sensors on the eyewear device. The plurality of combinations may be all possible combinations of subsets of the acoustic sensors that may be active. In a second embodiment, the algorithm optimizes a placement of two or more acoustic sensors on an eyewear device during manufacturing of the eyewear device. Each combination of acoustic sensors may represent a different arrangement of the acoustic sensors in the microphone array. In each embodiment, the system evaluates the obtained ATFs and, based on the evaluation, selects a combination of acoustic sensors for the microphone array.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: April 28, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Ravish Mehra, Antonio John Miller, Vladimir Tourbabin
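The evaluation loop of patent 10638222 above can be sketched as enumerating candidate combinations of acoustic sensors, scoring each from its array transfer functions, and keeping the best. The score below (negated condition number of the ATF matrix at one frequency) is an illustrative assumption, not the patent's metric.

```python
import itertools

import numpy as np


def score_subset(atfs, subset):
    """Score one combination of acoustic sensors from its array transfer functions.

    atfs:   (num_sensors, num_directions) complex ATFs at one frequency
    subset: indices of the active sensors
    A lower condition number keeps source directions easier to separate,
    so the negated value makes "higher = better".
    """
    return -np.linalg.cond(atfs[list(subset), :])


def best_sensor_subset(atfs, subset_size):
    """Evaluate every combination of the given size and select the best one."""
    num_sensors = atfs.shape[0]
    candidates = itertools.combinations(range(num_sensors), subset_size)
    return max(candidates, key=lambda s: score_subset(atfs, s))


if __name__ == "__main__":
    # Toy usage: 8 sensors, 36 candidate directions, pick the best 4-sensor set.
    rng = np.random.default_rng(0)
    atfs = rng.standard_normal((8, 36)) + 1j * rng.standard_normal((8, 36))
    print(best_sensor_subset(atfs, subset_size=4))
```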
  • Patent number: 10638252
    Abstract: An audio assembly includes a microphone assembly, a controller, and a speaker assembly. The microphone assembly detects audio signals. The detected audio signals originate from audio sources located within a local area. Each audio source is associated with a respective beamforming filter. The controller determines beamformed data using the beamforming filters associated with each audio source and a relative contribution of each of the audio sources using the beamformed data. The controller generates updated beamforming filters for each of the audio sources based in part on the relative acoustic contribution of the audio source, a current location of the audio source, and a transfer function associated with audio signals produced by the audio source. The controller generates updated beamformed data using the updated beamforming filters and performs an action (e.g., via the speaker assembly) based in part on the updated beamformed data.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: April 28, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Jacob Ryan Donley, Vladimir Tourbabin, Ravish Mehra
  • Patent number: 10602258
    Abstract: A method for manufacturing a cartilage conduction audio device is disclosed. A manufacturing system receives data describing a three-dimensional shape of an ear (e.g., the outer ear, behind the ear, the concha bowl, etc.) of a user. The system identifies one or more locations for one or more transducers along a back of an auricle of the ear for the user that vibrate the auricle over a frequency range causing the auricle to create an acoustic pressure wave at an entrance of the ear canal. The system then generates a design for a cartilage conduction audio device for the user based on the one or more identified locations of the transducers at which acoustic pressure waves generated by the one or more transducers satisfy a threshold performance metric for the user. The design may then be used to fabricate the cartilage conduction audio device.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: March 24, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Morteza Khaleghimeybodi, Antonio John Miller, Ravish Mehra
  • Patent number: 10572024
    Abstract: A head-mounted display (HMD) tracks a user's hand positions, orientations, and gestures using an ultrasound sensor coupled to the HMD. The ultrasound sensor emits ultrasound signals that reflect off the hands of the user, even if a hand of the user is obstructed by the other hand. The ultrasound sensor identifies features used to train a machine learning model based on detecting reflected ultrasound signals. For example, one of the features is the time delay between consecutive reflected ultrasound signals detected by the ultrasound sensor. The machine learning model learns to determine poses and gestures of the user's hands. The HMD optionally includes a camera that generates image data of the user's hands. The image data can also be used to train the machine learning model. The HMD may perform a calibration process to avoid detecting other objects and surfaces such as a wall next to the user.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: February 25, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Elliot Saba, Robert Y. Wang, Christopher David Twigg, Ravish Mehra
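One feature named in patent 10572024 above is the time delay between consecutive reflected ultrasound signals detected by the sensor. A minimal sketch of extracting such delays from a received waveform, assuming simple threshold-based pulse detection; the threshold, minimum pulse gap, and sample rate are illustrative assumptions.

```python
import numpy as np


def echo_delay_features(received, fs, threshold_ratio=0.5, min_gap_s=1e-4):
    """Extract time delays between consecutive reflected ultrasound pulses.

    received: 1-D array of samples from the ultrasound sensor
    fs:       sample rate in Hz
    Returns the delays (in seconds) between successive detected reflections,
    usable as input features for a hand pose and gesture model.
    """
    envelope = np.abs(received)
    threshold = threshold_ratio * envelope.max()
    min_gap = int(min_gap_s * fs)

    peaks = []
    for i in np.flatnonzero(envelope > threshold):
        # Keep a sample only if it is far enough from the previous reflection.
        if not peaks or i - peaks[-1] >= min_gap:
            peaks.append(i)

    return np.diff(np.asarray(peaks)) / fs  # delays between consecutive reflections


if __name__ == "__main__":
    # Toy usage: three synthetic reflections in a short capture window.
    fs = 192000
    signal = np.zeros(fs // 100)
    signal[[300, 700, 1200]] = 1.0
    print(echo_delay_features(signal, fs))
```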