Patents by Inventor Nava K. Balsam
Nava K. Balsam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11943601
Abstract: A method for audio beam steering, tracking, and audio effects for an immersive reality application is provided. The method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, identifying a perceived direction for the first acoustic source relative to the headset based on a location of the first acoustic source, and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the perceived direction. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
Type: Grant
Filed: May 27, 2022
Date of Patent: March 26, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Andrew Lovitt, Scott Phillip Selfon, Taher Shahbazi Mirzahasanloo, Sean Allyn Coffin, Nava K Balsam, Syavosh Zadissa
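The time-delay and amplitude rendering the abstract describes can be sketched roughly as below. This is a minimal illustration, not the patented method: it assumes a single azimuth angle, the Woodworth approximation for the interaural time difference, and constant-power panning for the level difference.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, rough average head radius (an assumed constant)

def render_direction(azimuth_rad):
    """Per-channel (delay_seconds, gain) for a source at the given azimuth.

    Positive azimuth is to the user's right; 0 is straight ahead.
    """
    # Interaural time difference: how much later the far ear hears the source
    # (Woodworth approximation).
    a = abs(azimuth_rad)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))
    # Constant-power panning for the level difference.
    pan = math.sin(azimuth_rad)                 # -1 (left) .. +1 (right)
    gain_left = math.sqrt((1.0 - pan) / 2.0)
    gain_right = math.sqrt((1.0 + pan) / 2.0)
    if azimuth_rad > 0:                         # source on the right: delay the left ear
        return {"left": (itd, gain_left), "right": (0.0, gain_right)}
    return {"left": (0.0, gain_left), "right": (itd, gain_right)}
```

A frontal source (azimuth 0) yields equal gains and no delay; a lateral source delays and attenuates the far ear.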
-
Publication number: 20230320669
Abstract: A real-time in-ear EEG signal verification system. The system includes an in-ear device (IED) configured to be placed within an ear canal of a user and a controller. The IED includes a speaker configured to present a calibration audio signal to the user, the calibration audio signal being embedded with a predetermined audible feature, and an in-ear electrode configured to be in contact with an inner surface of the ear canal. The controller is configured to instruct the speaker to present the calibration audio signal to the user, and generate neural signal data based on electrical signals from the in-ear electrode. The electrical signals correspond to brain activity of the user in response to the predetermined audible feature. The controller is configured to determine a quality of the generated neural signal data, and perform an action based on the quality of the neural signal data.
Type: Application
Filed: April 6, 2022
Publication date: October 12, 2023
Inventors: Maansi Desai, Antje Ihlefeld, Morteza Khaleghimeybodi, Srinivas Kota, Nava K. Balsam
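One simple way to frame the "determine a quality" step is an evoked-response check: compare signal energy just after each audible-feature onset against the baseline just before it. The sketch below is a hypothetical toy metric, not the disclosed verification algorithm; the window sizes and threshold are assumed values.

```python
def _mean(xs):
    return sum(xs) / len(xs)

def epoch_snr(signal, onsets, pre=4, post=4):
    """Crude evoked-response quality metric: mean energy in the window after
    each audible-feature onset, relative to the baseline window just before
    it. Values well above 1 suggest a genuine neural response was captured."""
    baseline, evoked = [], []
    for t in onsets:
        baseline.append(_mean([s * s for s in signal[t - pre:t]]))
        evoked.append(_mean([s * s for s in signal[t:t + post]]))
    return _mean(evoked) / (_mean(baseline) + 1e-12)

def electrode_contact_ok(signal, onsets, threshold=2.0):
    """Accept the electrode fit only if evoked energy clearly exceeds baseline."""
    return epoch_snr(signal, onsets) >= threshold
```

The "perform an action" step could then be, for example, prompting the user to reseat the device when `electrode_contact_ok` returns False.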
-
Patent number: 11736535
Abstract: Methods, systems, and storage media for initiating communication between artificial reality devices are disclosed. Exemplary implementations may: enable a discovery setting by a first user wearing a first artificial reality device; detect a presence of at least a second user wearing a second artificial reality device; determine a familiarity level of the first user with the second user; and in response to the familiarity level breaching a familiarity threshold, initiate a call with the second artificial reality device by the first artificial reality device, the call including an interaction in an artificial reality environment accessed via the first artificial reality device and the second artificial reality device.
Type: Grant
Filed: October 12, 2021
Date of Patent: August 22, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Nava K Balsam, Michaela Warnecke, Dianmin Lin, Ana Garcia Puyol
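The gating logic in the abstract reduces to a small decision function. The sketch below is purely illustrative; the familiarity formula, its inputs (mutual friends, past calls), and the 0.7 threshold are all assumptions, not details from the filing.

```python
def familiarity_score(mutual_friends, past_calls, cap=10):
    """Toy familiarity estimate in [0, 1] from simple social signals
    (hypothetical inputs; the patent does not specify a formula)."""
    return min(mutual_friends + 2 * past_calls, cap) / cap

def should_initiate_call(discovery_enabled, familiarity, threshold=0.7):
    """Initiate an artificial-reality call only when the first user has
    opted into discovery and the familiarity level breaches the threshold."""
    return discovery_enabled and familiarity >= threshold
```

Note the discovery setting acts as a hard opt-in gate: no familiarity level triggers a call while it is disabled.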
-
Patent number: 11659324
Abstract: A system and method for storing data samples in discrete poses and recalling the stored data samples for updating a sound filter. The system determines that a microphone array at a first time period is in a first discrete pose of a plurality of discrete poses, wherein the plurality of discrete poses discretizes a pose space. The pose space includes at least an orientation component and may further include a translation component. The system retrieves one or more historical data samples associated with the first discrete pose, generated from sound captured by the microphone array before the first time period, and stored in a memory cache (e.g., for memoization). The system updates a sound filter for the first discrete pose using the retrieved one or more historical data samples. The system generates and presents audio content using the updated sound filter.
Type: Grant
Filed: October 18, 2021
Date of Patent: May 23, 2023
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Jacob Ryan Donley, Nava K. Balsam, Vladimir Tourbabin
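The core idea, quantize the pose space and key a sample cache on the discrete pose, can be sketched as follows. This toy version assumes a yaw-only pose with a fixed 15-degree bin width and a simple averaging filter update; the real pose space includes full orientation and possibly translation.

```python
class PoseFilterCache:
    """Cache of captured data samples keyed by discrete head pose
    (yaw-only: a hypothetical simplification of the full pose space)."""

    def __init__(self, yaw_step_deg=15.0):
        self.yaw_step = yaw_step_deg
        self.bins = int(round(360.0 / yaw_step_deg))
        self.samples = {}          # discrete pose index -> list of samples

    def discretize(self, yaw_deg):
        """Map a continuous yaw angle to a discrete pose bin (with wraparound)."""
        return int(round(yaw_deg / self.yaw_step)) % self.bins

    def store(self, yaw_deg, sample):
        self.samples.setdefault(self.discretize(yaw_deg), []).append(sample)

    def recall(self, yaw_deg):
        """Historical samples previously captured in the same discrete pose."""
        return self.samples.get(self.discretize(yaw_deg), [])

    def update_filter(self, yaw_deg, current_sample):
        """Blend the current sample with the memoized history for this pose
        (plain averaging stands in for the actual filter update)."""
        history = self.recall(yaw_deg) + [current_sample]
        return sum(history) / len(history)
```

Because nearby yaw angles map to the same bin, samples captured at 3° and 359° both enrich the straight-ahead pose's filter.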
-
Publication number: 20230111835
Abstract: Methods, systems, and storage media for eye tracking in artificial reality (e.g., virtual reality, augmented reality, mixed reality, etc.) headsets are disclosed. Exemplary implementations may: generate a simulated environment for a user through an artificial reality headset; track an eye of the user through an eye sensor responsive to the artificial reality headset being worn by the user; detect a physical aspect of the eye through the eye sensor indicative of an eye malady; and adjust a display setting of the artificial reality headset based at least in part on the detected physical aspect of the eye.
Type: Application
Filed: October 12, 2021
Publication date: April 13, 2023
Inventors: Nava K. Balsam, Michaela Warnecke, Aminata Dia, Dianmin Lin
-
Publication number: 20230065296
Abstract: A wearable device assembly is described for determining eye-tracking information of a user using generated biopotential signals on the head of the user. The eye tracking system monitors biopotential signals generated on a head of a user using electrodes that are mounted on a device that is coupled to the head of the user. The system determines eye-tracking information for the user based on the monitored biopotential signals using a machine learning model. The system performs at least one action based in part on the determined eye-tracking information.
Type: Application
Filed: August 30, 2021
Publication date: March 2, 2023
Inventors: Morteza Khaleghimeybodi, Nava K. Balsam, Nils Thomas Fritiof Lunner
-
Publication number: 20230050966
Abstract: A method for audio beam steering, tracking, and audio effects for an immersive reality application is provided. The method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, identifying a perceived direction for the first acoustic source relative to the headset based on a location of the first acoustic source, and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the perceived direction. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
Type: Application
Filed: May 27, 2022
Publication date: February 16, 2023
Inventors: Andrew Lovitt, Scott Phillip Selfon, Taher Shahbazi Mirzahasanloo, Sean Allyn Coffin, Nava K Balsam, Syavosh Zadissa
-
Publication number: 20220394405
Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters, and a configured delay between the multiple channels. The system applies the configured audio TLDR to an audio signal received at the single channel to generate spatialized multiple channel audio content for each of the multiple audio channels and presents the generated spatialized audio content at multiple channels to a user via a headset.
Type: Application
Filed: August 22, 2022
Publication date: December 8, 2022
Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Harty Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke
-
Patent number: 11457325
Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters, and a configured delay between the multiple channels. The system applies the configured audio TLDR to an audio signal received at the single channel to generate spatialized multiple channel audio content for each of the multiple audio channels and presents the generated spatialized audio content at multiple channels to a user via a headset.
Type: Grant
Filed: July 19, 2021
Date of Patent: September 27, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke
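At its simplest, a time and level difference renderer turns one mono channel into two channels separated by a configured delay and a configured level difference. The sketch below is a bare-bones stand-in, not the patented renderer: it omits the filter parameter model and the binaural dynamic filters, and the sign convention for `level_db` is an assumption.

```python
def tldr_render(mono, delay_samples, level_db):
    """Render a mono signal into (left, right) using a configured
    inter-channel delay (in samples) and level difference (in dB).

    Positive level_db is taken to mean the source sits toward the left,
    so the right channel becomes the delayed, attenuated 'far' channel.
    """
    far_gain = 10.0 ** (-abs(level_db) / 20.0)
    # Zero-pad so both channels have equal length after the delay.
    near = list(mono) + [0.0] * delay_samples
    far = [0.0] * delay_samples + [s * far_gain for s in mono]
    if level_db >= 0:
        return near, far        # left near, right far
    return far, near
```

A fuller implementation would replace the single gain with frequency-dependent filters, which is where the configured binaural dynamic filters come in.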
-
Publication number: 20220182772
Abstract: Embodiments relate to an audio system for various artificial reality applications. The audio system performs large scale filter optimization for audio rendering, preserving spatial and intra-population characteristics using neural networks. Further, the audio system performs adaptive hearing enhancement-aware binaural rendering. The audio system includes an in-ear device with an inertial measurement unit (IMU) and a camera. The camera captures image data of a local area, and the image data is used to correct for IMU drift. In some embodiments, the audio system calculates a transducer to ear response for an individual ear using an equalization prediction or acoustic simulation framework. Individual ear pressure fields as a function of frequency are generated. Frequency-dependent directivity patterns of the transducers are characterized in the free field. In some embodiments, the audio system includes a headset and one or more removable audio apparatuses for enhancing acoustic features of the headset.
Type: Application
Filed: February 22, 2022
Publication date: June 9, 2022
Inventors: Peter Harty Dodds, Nava K. Balsam, Vamsi Krishna Ithapu, William Owen Brimijoin, II, Samuel Clapp, Christi Miller, Michaela Warnecke, Nils Thomas Fritiof Lunner, Paul Thomas Calamia, Morteza Khaleghimeybodi, Pablo Francisco Faundez Hoffmann, Ravish Mehra, Salvael Ortega Estrada, Tetsuro Oishi
-
Patent number: 11290837
Abstract: A system that uses persistent sound source selection to augment audio content. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound emitted by sound sources in a local area. The system further comprises an audio controller integrated into the headset. The audio controller receives sound signals corresponding to sounds emitted by sound sources in the local area. The audio controller further updates a ranking of the sound sources based on eye tracking information of the user. The audio controller further selectively applies one or more filters to the one or more of the sound signals according to the ranking to generate augmented audio data. The audio controller further provides the augmented audio data to a speaker assembly for presentation to the user.
Type: Grant
Filed: October 23, 2020
Date of Patent: March 29, 2022
Assignee: Facebook Technologies, LLC
Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Nava K. Balsam
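The rank-then-filter pipeline in the abstract can be illustrated with a small sketch. Everything here is hypothetical scaffolding: ranking by angular distance from the gaze direction is just one plausible use of eye-tracking information, and the rank-dependent gains stand in for the patent's unspecified filters.

```python
def rank_sources(sources, gaze_azimuth_deg):
    """Rank sound sources by angular distance from the user's gaze,
    a stand-in for the eye-tracking-based ranking in the abstract."""
    def angular_distance(src):
        d = abs(src["azimuth"] - gaze_azimuth_deg) % 360.0
        return min(d, 360.0 - d)
    return sorted(sources, key=angular_distance)

def augment(sources, gaze_azimuth_deg, gains=(1.0, 0.5, 0.25)):
    """Apply a rank-dependent gain filter: the attended source stays at
    full level while lower-ranked sources are progressively attenuated."""
    ranked = rank_sources(sources, gaze_azimuth_deg)
    return [{**src, "gain": gains[min(i, len(gains) - 1)]}
            for i, src in enumerate(ranked)]
```

The "persistent" aspect would come from re-running this as gaze updates arrive while keeping source identities stable across frames.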
-
Publication number: 20220021996
Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters, and a configured delay between the multiple channels. The system applies the configured audio TLDR to an audio signal received at the single channel to generate spatialized multiple channel audio content for each of the multiple audio channels and presents the generated spatialized audio content at multiple channels to a user via a headset.
Type: Application
Filed: July 19, 2021
Publication date: January 20, 2022
Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke