Patents by Inventor William Owen Brimijoin, II
William Owen Brimijoin, II has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12219331
Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to the user.
Type: Grant
Filed: October 26, 2023
Date of Patent: February 4, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra
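As a rough illustration of the capture-select-filter-deliver flow this abstract describes (not the patented method), the sketch below picks the loudest detected source and applies a simple speech-band filter before handing the result off for in-ear playback. All function names, thresholds, and the choice of filter are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter

def detect_sources(mic_frames, threshold=0.01):
    """Very rough stand-in for source identification: treat each microphone
    channel with sufficient energy as one 'source'."""
    return [i for i, ch in enumerate(mic_frames) if np.sqrt(np.mean(ch ** 2)) > threshold]

def choose_target(mic_frames, source_ids):
    """Pick the loudest identified source as the target (illustrative heuristic)."""
    return max(source_ids, key=lambda i: np.sqrt(np.mean(mic_frames[i] ** 2)))

def enhancement_filter(fs, lo=300.0, hi=4000.0, order=4):
    """One example 'filter to apply': a speech-band band-pass."""
    return butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")

def augment(signal, filt):
    b, a = filt
    return lfilter(b, a, signal)

# Toy usage: four microphone channels at 16 kHz, one carrying a tone.
fs = 16000
t = np.arange(fs) / fs
mics = np.zeros((4, fs))
mics[2] = 0.2 * np.sin(2 * np.pi * 440 * t)        # the "target" talker
sources = detect_sources(mics)
target = choose_target(mics, sources)
augmented = augment(mics[target], enhancement_filter(fs))
# `augmented` would then be streamed to the in-ear device for presentation.
```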
-
Publication number: 20240305951
Abstract: Techniques for determining personalized head-related transfer functions (HRTFs) using a head-mounted device and in-ear devices include: receiving, from a sensor array of the head-mounted device, a first sound signal associated with a sound from a sound source in a local environment of a user of the head-mounted device; determining that reverberation characteristics and spectral characteristics of the sound meet predetermined criteria based on the first sound signal; determining that the sound source is stationary within a time period; determining a relative location of the sound source with respect to the user; receiving, from an in-ear device in an ear of the user, a second sound signal associated with the sound from the sound source; and determining, based on at least the second sound signal, an HRTF or one or more parameters of the HRTF associated with the relative location of the sound source for the user.
Type: Application
Filed: March 6, 2024
Publication date: September 12, 2024
Inventors: Andrew FRANCL, Tobias Daniel KABZINSKI, Hao LU, Antje IHLEFELD, William Owen BRIMIJOIN, II
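The publication gates an in-situ HRTF estimate on several checks (reverberation/spectral criteria, source stationarity, a known relative direction) before using the in-ear capture. The sketch below mirrors only that control flow; the thresholds, the crest-factor and bandwidth tests, and the crude frequency-domain ratio used as an "HRTF estimate" are illustrative assumptions, not the published technique.

```python
import numpy as np

def meets_signal_criteria(ref, fs, max_crest=20.0, min_bandwidth_hz=4000.0):
    """Stand-in for the reverberation/spectral checks on the array signal."""
    crest = np.max(np.abs(ref)) / (np.sqrt(np.mean(ref ** 2)) + 1e-12)
    spectrum = np.abs(np.fft.rfft(ref))
    freqs = np.fft.rfftfreq(len(ref), 1.0 / fs)
    occupied = freqs[spectrum > 0.05 * spectrum.max()]
    bandwidth = occupied.max() - occupied.min()
    return crest < max_crest and bandwidth > min_bandwidth_hz

def is_stationary(doa_track_deg, tolerance_deg=2.0):
    """Source counts as stationary if its estimated direction barely moves."""
    return (np.max(doa_track_deg) - np.min(doa_track_deg)) < tolerance_deg

def estimate_hrtf(array_signal, in_ear_signal):
    """Crude frequency-domain ratio between the in-ear and reference captures."""
    n = min(len(array_signal), len(in_ear_signal))
    ref = np.fft.rfft(array_signal[:n])
    ear = np.fft.rfft(in_ear_signal[:n])
    return ear / (ref + 1e-12)

def maybe_update_hrtf(array_signal, in_ear_signal, doa_track_deg, fs=48000):
    if not meets_signal_criteria(array_signal, fs):
        return None
    if not is_stationary(doa_track_deg):
        return None
    direction = float(np.mean(doa_track_deg))
    return direction, estimate_hrtf(array_signal, in_ear_signal)

# Toy usage with broadband noise standing in for an opportunistic sound.
rng = np.random.default_rng(0)
noise = rng.standard_normal(48000)
result = maybe_update_hrtf(noise, 0.8 * noise, doa_track_deg=[30.2, 30.4, 29.9])
```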
-
Patent number: 12069463
Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on the received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters and a configured delay between the multiple channels. The system applies the configured audio TLDR to an audio signal received at the single channel to generate spatialized audio content for each of the multiple channels and presents the generated spatialized audio content to a user via a headset.
Type: Grant
Filed: August 22, 2022
Date of Patent: August 20, 2024
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Harty Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke
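To make the time-and-level-difference idea concrete, the sketch below maps an azimuth to an interaural delay and gain and applies them to a mono input to produce two channels. It uses textbook approximations (the Woodworth ITD model, a crude broadband ILD) rather than the patent's filter parameter model; every constant and name is an assumption.

```python
import numpy as np

def configure_tldr(azimuth_deg, fs=48000, head_radius_m=0.0875, c=343.0):
    """Map an azimuth to an interaural time difference (in samples) and a
    level difference (in dB) using simple textbook approximations."""
    az = np.deg2rad(azimuth_deg)
    itd_s = (head_radius_m / c) * (az + np.sin(az))   # Woodworth ITD model
    ild_db = 6.0 * np.sin(az)                         # crude broadband ILD
    return int(round(itd_s * fs)), ild_db

def render(mono, azimuth_deg, fs=48000):
    """Apply the configured delay and level difference to one input channel
    to produce a left/right pair."""
    delay, ild_db = configure_tldr(azimuth_deg, fs)
    near = mono.copy()
    far = np.concatenate([np.zeros(abs(delay)), mono])[: len(mono)]
    far *= 10 ** (-abs(ild_db) / 20.0)
    # Positive azimuth = source to the right: right ear near, left ear far.
    return (far, near) if azimuth_deg >= 0 else (near, far)

fs = 48000
mono = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
left, right = render(mono, azimuth_deg=45, fs=fs)
```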
-
Publication number: 20240236602
Abstract: Techniques are described for generating spatialized audio content based on head orientation information and torso orientation information obtained from a drift compensation system. The drift compensation system compensates for drift in measurements performed by an inertial measurement unit (IMU), based on biological constraints pertaining to a user. The head orientation information is applied to a first set of filters to generate intermediate audio content. In turn, the torso orientation information and the intermediate audio content are applied to a second set of filters to generate the spatialized audio content. The first set of filters includes one or more audio filters that receive an input audio signal corresponding to a left channel. The second set of filters includes one or more audio filters that receive an input audio signal corresponding to a right channel. The spatialized audio content includes separate output signals for the left channel and the right channel.
Type: Application
Filed: March 21, 2024
Publication date: July 11, 2024
Inventors: Philip ROBINSON, William Owen BRIMIJOIN, II, Antje IHLEFELD
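A minimal two-stage sketch of the head-then-torso filtering order described above: stage one uses the head orientation to produce intermediate left/right signals, stage two applies a torso-dependent adjustment to them. The specific delay and gain rules are placeholders, not the published filter sets.

```python
import numpy as np

def head_filter_set(mono, source_az_deg, head_yaw_deg, fs=48000,
                    head_radius_m=0.0875, c=343.0):
    """Stage 1 (head-driven): rotate the source into the head frame and apply
    an interaural delay, yielding intermediate left/right signals."""
    rel = np.deg2rad(source_az_deg - head_yaw_deg)
    itd = int(round((head_radius_m / c) * (rel + np.sin(rel)) * fs))
    delayed = np.concatenate([np.zeros(abs(itd)), mono])[: len(mono)]
    return (delayed, mono) if itd >= 0 else (mono, delayed)

def torso_filter_set(left, right, torso_yaw_deg, head_yaw_deg):
    """Stage 2 (torso-driven): a crude torso-shadow gain that grows with how
    far the head is twisted relative to the torso."""
    twist = np.deg2rad(head_yaw_deg - torso_yaw_deg)
    shadow = 1.0 - 0.15 * abs(np.sin(twist))
    return shadow * left, shadow * right

def spatialize(mono, source_az_deg, head_yaw_deg, torso_yaw_deg):
    left, right = head_filter_set(mono, source_az_deg, head_yaw_deg)
    return torso_filter_set(left, right, torso_yaw_deg, head_yaw_deg)

fs = 48000
mono = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
left, right = spatialize(mono, source_az_deg=60, head_yaw_deg=20, torso_yaw_deg=0)
```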
-
Patent number: 11943602
Abstract: A system can include a position sensor configured to output position data of an HWD. The system can include one or more processors configured to identify a first head angle of the HWD using the position sensor, generate an audio signal using the first head angle, identify a second head angle of the HWD using the position sensor, determine an angle error based at least on the first head angle and the second head angle, and apply at least one of a time difference or a level difference to the audio signal based at least on the angle error to adjust the audio signal. The system can include an audio output device configured to output the adjusted audio signal. By adjusting the audio signal using the angle error, the system can correct for long spatial update latencies and reduce the perceptual impact of such latencies for the user.
Type: Grant
Filed: August 23, 2021
Date of Patent: March 26, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: William Owen Brimijoin, II, Henrik Gert Hassager, Sebastià Vicenç Amengual Garí
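The sketch below illustrates the late-stage correction idea: compute the error between the head angle used to render and the newest head angle, then nudge the already-rendered pair by a corresponding time/level difference. The ITD/ILD mapping and all names are assumptions for illustration only.

```python
import numpy as np

def angle_error(rendered_head_deg, current_head_deg):
    """Error between the head angle used for rendering and the latest angle,
    wrapped to the range [-180, 180)."""
    return (current_head_deg - rendered_head_deg + 180.0) % 360.0 - 180.0

def latency_correction(left, right, err_deg, fs=48000,
                       head_radius_m=0.0875, c=343.0):
    """Nudge the rendered binaural pair by the time/level difference
    corresponding to the residual angle error."""
    err = np.deg2rad(err_deg)
    itd = int(round((head_radius_m / c) * (err + np.sin(err)) * fs))
    ild = 10 ** (-abs(3.0 * np.sin(err)) / 20.0)
    if itd > 0:        # scene shifted toward the right ear: delay/attenuate left
        left = np.concatenate([np.zeros(itd), left])[: len(left)] * ild
    elif itd < 0:
        right = np.concatenate([np.zeros(-itd), right])[: len(right)] * ild
    return left, right

# Toy usage: correct a 4-degree mismatch on a short buffer.
left = right = np.zeros(480)
left, right = latency_correction(left, right, angle_error(10.0, 14.0))
```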
-
Publication number: 20240056733
Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to the user.
Type: Application
Filed: October 26, 2023
Publication date: February 15, 2024
Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra
-
Patent number: 11843926
Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to the user.
Type: Grant
Filed: December 21, 2021
Date of Patent: December 12, 2023
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra
-
Patent number: 11751003
Abstract: Embodiments relate to personalization of a head-related transfer function (HRTF) for a given user. A sound source is spatialized for an initial position using an initial version of an HRTF to obtain an initial spatialized sound source. Upon presentation of the initial spatialized sound source, at least one property of the HRTF is adjusted in an iterative manner based on at least one perceptive response from the user to generate a version of the HRTF customized for the user. Each perceptive response from the user indicates a respective offset between a perceived position and a target position of the sound source. The customized version of the HRTF is applied to one or more audio channels to form spatialized audio content for the perceived position. The spatialized audio content is presented to the user, wherein the offset between the perceived position and the target position is reduced.
Type: Grant
Filed: October 11, 2021
Date of Patent: September 5, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: William Owen Brimijoin, II, Tomasz Rudzki, Sebastià Vicenç Amengual Garí, Michaela Warnecke, Andrew Francl
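The iterative adjust-from-reported-offset loop can be sketched in a few lines. Here a single hypothetical HRTF "property" (a global azimuth warp) is nudged until the user's reported perceived-vs-target offset falls below a tolerance; the step rule, parameter, and callback are assumptions, not the patented procedure.

```python
def personalize_warp(report_offset_deg, initial_warp=0.0,
                     step=0.5, max_iters=20, tolerance_deg=1.0):
    """Iteratively adjust one HRTF property (a global azimuth warp, in degrees)
    from the user's reported perceived-vs-target offsets.

    `report_offset_deg(warp)` stands in for one presentation/response round:
    it spatializes a probe with the current warp and returns the offset the
    listener reports.  A real system would adjust richer HRTF parameters.
    """
    warp = initial_warp
    for _ in range(max_iters):
        offset = report_offset_deg(warp)
        if abs(offset) < tolerance_deg:
            break
        warp -= step * offset      # move the warp to shrink the reported offset
    return warp

# Toy stand-in: the listener's true bias is +6 degrees; reports are noiseless.
final_warp = personalize_warp(lambda w: 6.0 + w)
```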
-
Patent number: 11670321
Abstract: A system includes a headset to capture sound and a visual signal of a local area including one or more sound sources. The system determines a strength of the audio signal and a strength of the portion of the visual signal associated with that audio signal, compares the two strengths, selects the weaker signal, and augments it. The headset accordingly presents augmented audio-visual content to a user, thereby enhancing the user's perception of the weaker signal.
Type: Grant
Filed: June 28, 2021
Date of Patent: June 6, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Cesare Valerio Parise, William Owen Brimijoin, II, Philip Robinson
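A toy version of the compare-and-boost-the-weaker-modality idea: estimate a salience for the audio and for the associated image patch, then amplify whichever is weaker. RMS and local contrast are stand-ins for whatever strength measures the system actually uses, and the gains are arbitrary.

```python
import numpy as np

def audio_strength(audio):
    """RMS level as a stand-in for perceptual audio salience."""
    return float(np.sqrt(np.mean(audio ** 2)))

def visual_strength(patch):
    """Local contrast as a stand-in for perceptual visual salience."""
    return float(np.std(patch))

def augment_weaker(audio, patch, audio_gain=2.0, contrast_gain=1.5):
    """Boost whichever modality is weaker for the same source (the comparison
    assumes both measures have been normalized to a common scale)."""
    if audio_strength(audio) < visual_strength(patch):
        return audio * audio_gain, patch
    mean = patch.mean()
    return audio, mean + (patch - mean) * contrast_gain

# Toy usage: quiet audio paired with a high-contrast patch gets an audio boost.
rng = np.random.default_rng(1)
audio, patch = 0.01 * rng.standard_normal(4800), rng.standard_normal((32, 32))
boosted_audio, patch = augment_weaker(audio, patch)
```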
-
Patent number: 11644894
Abstract: A method comprising determining a set of position parameters for an inertial measurement unit (IMU) on a headset worn by a user. The set of position parameters includes at least a first yaw measurement and a first roll measurement. The set describes a pointing vector. The method further comprises calculating a drift correction component that describes a rate of correction. The drift correction component is based at least in part on the set of position parameters. The method further comprises applying the drift correction component to one or more subsequent yaw measurements for the IMU. The drift correction component forces an estimated nominal position vector to the pointing vector at the rate of correction.
Type: Grant
Filed: June 22, 2022
Date of Patent: May 9, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: William Owen Brimijoin, II, Andrew Lovitt
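A minimal sketch of the drift-correction idea: derive a correction rate from the position parameters, then leak each subsequent yaw estimate toward the pointing vector at that rate so slow gyroscope drift is cancelled without fighting fast head turns. The roll-based rate rule and the constants are assumptions.

```python
def drift_correction_rate(roll_deg, base_rate=0.02):
    """Illustrative rule: correct faster when the head is level (small roll),
    on the assumption that level poses better reflect the nominal 'forward'."""
    return base_rate * max(0.0, 1.0 - abs(roll_deg) / 45.0)

def correct_yaw(raw_yaw_deg, pointing_yaw_deg, roll_deg=0.0):
    """Apply the drift correction to each subsequent yaw measurement: the
    estimate is continually pulled toward the pointing vector at `rate`."""
    rate = drift_correction_rate(roll_deg)
    estimate = raw_yaw_deg[0]
    corrected = [estimate]
    for prev, yaw in zip(raw_yaw_deg, raw_yaw_deg[1:]):
        estimate += yaw - prev                          # integrate the IMU delta
        estimate += rate * (pointing_yaw_deg - estimate)  # leak toward pointing vector
        corrected.append(estimate)
    return corrected

# Toy usage: a slowly drifting yaw track stays bounded near the pointing vector.
drifting = [0.05 * i for i in range(200)]
stabilized = correct_yaw(drifting, pointing_yaw_deg=0.0)
```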
-
Patent number: 11616580
Abstract: A system receives audio data in a frequency range of 20 Hz-20 kHz. The received audio data is encoded by the system into ultrasonic data in frequencies that are greater than 20 kHz, and transmitted into a local area that is proximal to the transmitting device, i.e., within the transmission range of the transmitting device. An ultrasonic communication device that is located in the transmission range of the transmitting device may receive the ultrasonic data. The received ultrasonic data is decoded by the ultrasonic communication system in the receiving device into audio data in a frequency range of 20 Hz-20 kHz, and subsequently presented to a user of the receiving ultrasonic communication device.
Type: Grant
Filed: May 9, 2022
Date of Patent: March 28, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Peter Harty Dodds, Morteza Khaleghimeybodi, Philip Robinson, Scott Phillips Porter, William Owen Brimijoin, II, Andrew Lovitt
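One simple way to move audible audio onto an ultrasonic carrier and back is amplitude modulation with coherent demodulation; the sketch below shows that scheme as an illustration of "encode above 20 kHz, transmit, decode". The 30 kHz carrier, 96 kHz sample rate, and modulation index are assumptions, and a real link would need to manage bandwidth and synchronization far more carefully.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 96000           # sample rate high enough to represent the carrier
CARRIER_HZ = 30000   # an assumed ultrasonic carrier above 20 kHz

def encode(audio):
    """Amplitude-modulate baseband audio onto the ultrasonic carrier."""
    t = np.arange(len(audio)) / FS
    return (1.0 + 0.5 * audio) * np.cos(2 * np.pi * CARRIER_HZ * t)

def decode(ultrasonic):
    """Coherently demodulate and low-pass back down to the audible band."""
    t = np.arange(len(ultrasonic)) / FS
    mixed = ultrasonic * np.cos(2 * np.pi * CARRIER_HZ * t)
    b, a = butter(6, 20000 / (FS / 2), btype="low")
    baseband = lfilter(b, a, mixed)          # ~ (1 + 0.5 * audio) / 2
    return 2.0 * (2.0 * baseband - 1.0)      # undo modulation index and DC term

# Toy usage: a 1 kHz tone survives the encode/decode round trip.
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 1000 * t)
recovered = decode(encode(tone))
```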
-
Patent number: 11579837
Abstract: A system creates an audio profile. The audio profile may be stored in a database. For example, the audio profile may be securely stored in a database of a social network and associated with a user account. The audio profile may contain data describing the way in which the specific user hears and interprets sounds. Systems and applications which present sounds to the user may access the audio profile and modify the sounds presented to the user based on the data in the audio profile to enhance the audio experience for the user.
Type: Grant
Filed: March 9, 2021
Date of Patent: February 14, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Philip Robinson, Antonio John Miller, William Owen Brimijoin, II, Andrew Lovitt
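A hypothetical data-structure sketch of such a profile and of an application consuming it: per-band gains describing how a particular user hears, applied to whatever the system is about to present. The fields and the band-gain scheme are invented for illustration, not the patented schema.

```python
from dataclasses import dataclass, field

@dataclass
class AudioProfile:
    """Per-user listening profile; fields are illustrative, not the patented schema."""
    user_id: str
    per_band_gain_db: dict = field(default_factory=dict)   # e.g. {4000: 6.0}
    preferred_loudness_db: float = -20.0

def apply_profile(band_levels_db, profile):
    """Shape a dict of {center_frequency_hz: level_db} using the stored gains."""
    return {f: lvl + profile.per_band_gain_db.get(f, 0.0)
            for f, lvl in band_levels_db.items()}

# Toy usage: boost the high bands for a user with high-frequency loss.
profile = AudioProfile(user_id="user-123", per_band_gain_db={4000: 6.0, 8000: 9.0})
shaped = apply_profile({1000: -30.0, 4000: -30.0, 8000: -30.0}, profile)
```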
-
Publication number: 20220394405
Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on the received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters and a configured delay between the multiple channels. The system applies the configured audio TLDR to an audio signal received at the single channel to generate spatialized audio content for each of the multiple channels and presents the generated spatialized audio content to a user via a headset.
Type: Application
Filed: August 22, 2022
Publication date: December 8, 2022
Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Harty Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke
-
Patent number: 11523240
Abstract: An audio system generates customized head-related transfer functions (HRTFs) for a user. The audio system receives an initial set of estimated HRTFs. The initial set of HRTFs may have been estimated using a trained machine learning and computer vision system and pictures of the user's ears. The audio system generates a set of test locations using the initial set of HRTFs. The audio system presents test sounds at each of the initial set of test locations using the initial set of HRTFs. The audio system monitors user responses to the test sounds. The audio system uses the monitored responses to generate a new set of estimated HRTFs and a new set of test locations. The process repeats until a threshold accuracy is achieved or until a set period of time expires. The audio system presents audio content to the user using the customized HRTFs.
Type: Grant
Filed: March 31, 2021
Date of Patent: December 6, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Vamsi Krishna Ithapu, William Owen Brimijoin, II, Henrik Gert Hassager
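The refinement loop in this abstract (propose test locations, present sounds, measure responses, refine, repeat until accurate or out of time) can be written generically. In the sketch below the four callables are placeholders for the system's real components; only the loop structure mirrors the abstract.

```python
import time

def customize_hrtfs(initial_hrtfs, propose_locations, present_and_measure,
                    refine, error_threshold_deg=3.0, time_budget_s=120.0):
    """Generic refinement loop matching the flow in the abstract.

    propose_locations(hrtfs)        -> list of test directions
    present_and_measure(hrtfs, loc) -> localization error (deg) at that direction
    refine(hrtfs, errors)           -> updated HRTF estimate
    All callables are hypothetical stand-ins for the real components.
    """
    hrtfs, start = initial_hrtfs, time.monotonic()
    while time.monotonic() - start < time_budget_s:
        locations = propose_locations(hrtfs)
        errors = {loc: present_and_measure(hrtfs, loc) for loc in locations}
        if max(errors.values()) < error_threshold_deg:
            break
        hrtfs = refine(hrtfs, errors)
    return hrtfs

# Toy usage with stand-ins: one test direction, error halves with each refinement.
hrtfs = customize_hrtfs(
    initial_hrtfs={"error": 10.0},
    propose_locations=lambda h: [0.0],
    present_and_measure=lambda h, loc: h["error"],
    refine=lambda h, errs: {"error": h["error"] * 0.5},
)
```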
-
Publication number: 20220342213
Abstract: Embodiments relate to an audio system for various audio applications. The audio system registers the locations of one or more sound sources and selects the target sound source based on a hidden Markov model. A health monitoring system that integrates an audio system may use information collected by sensors to monitor an amount of social interaction of a user and predict a risk of dementia and/or hearing loss based on a model. The audio system uses a current/voltage sensor to detect electrical drive signals for determining a level of audio leakage of the audio system. Additionally, the audio system may update a video stream with an audio background based on an artificial visual background in the video stream, so that the updated video stream sounds as if the user were located in a physical space corresponding to that background.
Type: Application
Filed: July 8, 2022
Publication date: October 27, 2022
Inventors: Hao Lu, William Owen Brimijoin, II, Christi Miller, Limin Zhou, Andrew Lovitt
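For the HMM-based target selection mentioned first in this abstract, a tiny forward pass over "which registered source is the target" looks roughly like the sketch below. The sticky transition model and the evidence values are invented for illustration; the publication's actual observation and transition models are not described here.

```python
import numpy as np

def select_target_hmm(observation_likelihoods, stay_prob=0.9):
    """Forward pass over which registered source is the attended target.

    `observation_likelihoods` is a (time, n_sources) array of per-frame
    evidence (e.g. how well each source direction matches head/gaze cues);
    the transition model simply favours sticking with the current target.
    """
    n_t, n_src = observation_likelihoods.shape
    switch = (1.0 - stay_prob) / max(n_src - 1, 1)
    transition = np.full((n_src, n_src), switch)
    np.fill_diagonal(transition, stay_prob)
    belief = np.full(n_src, 1.0 / n_src)
    for obs in observation_likelihoods:
        belief = (belief @ transition) * obs
        belief /= belief.sum()
    return int(np.argmax(belief))

# Toy usage: three sources, evidence increasingly favours source 2.
likelihoods = np.array([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5], [0.1, 0.2, 0.7]])
target = select_target_hmm(likelihoods)      # -> 2
```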
-
Patent number: 11457325
Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on the received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters and a configured delay between the multiple channels. The system applies the configured audio TLDR to an audio signal received at the single channel to generate spatialized audio content for each of the multiple channels and presents the generated spatialized audio content to a user via a headset.
Type: Grant
Filed: July 19, 2021
Date of Patent: September 27, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke
-
Patent number: 11409360
Abstract: A method comprising determining a set of position parameters for an inertial measurement unit (IMU) on a headset worn by a user. The set of position parameters includes at least a first yaw measurement and a first roll measurement. The set describes a pointing vector. The method further comprises calculating a drift correction component that describes a rate of correction. The drift correction component is based at least in part on the set of position parameters. The method further comprises applying the drift correction component to one or more subsequent yaw measurements for the IMU. The drift correction component forces an estimated nominal position vector to the pointing vector at the rate of correction.
Type: Grant
Filed: January 28, 2020
Date of Patent: August 9, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: William Owen Brimijoin, II, Andrew Lovitt
-
Patent number: 11368231
Abstract: A system receives audio data in a frequency range of 20 Hz-20 kHz. The received audio data is encoded by the system into ultrasonic data in frequencies that are greater than 20 kHz, and transmitted into a local area that is proximal to the transmitting device, i.e., within the transmission range of the transmitting device. An ultrasonic communication device that is located in the transmission range of the transmitting device may receive the ultrasonic data. The received ultrasonic data is decoded by the ultrasonic communication system in the receiving device into audio data in a frequency range of 20 Hz-20 kHz, and subsequently presented to a user of the receiving ultrasonic communication device.
Type: Grant
Filed: December 21, 2018
Date of Patent: June 21, 2022
Assignee: Facebook Technologies, LLC
Inventors: Peter Harty Dodds, Morteza Khaleghimeybodi, Philip Robinson, Scott Porter, William Owen Brimijoin, II, Andrew Lovitt
-
Publication number: 20220182772
Abstract: Embodiments relate to an audio system for various artificial reality applications. The audio system performs large-scale filter optimization for audio rendering, preserving spatial and intra-population characteristics using neural networks. Further, the audio system performs adaptive hearing-enhancement-aware binaural rendering. The audio system includes an in-ear device with an inertial measurement unit (IMU) and a camera. The camera captures image data of a local area, and the image data is used to correct for IMU drift. In some embodiments, the audio system calculates a transducer-to-ear response for an individual ear using an equalization prediction or acoustic simulation framework. Individual ear pressure fields as a function of frequency are generated. Frequency-dependent directivity patterns of the transducers are characterized in the free field. In some embodiments, the audio system includes a headset and one or more removable audio apparatuses for enhancing acoustic features of the headset.
Type: Application
Filed: February 22, 2022
Publication date: June 9, 2022
Inventors: Peter Harty Dodds, Nava K. Balsam, Vamsi Krishna Ithapu, William Owen Brimijoin, II, Samuel Clapp, Christi Miller, Michaela Warnecke, Nils Thomas Fritiof Lunner, Paul Thomas Calamia, Morteza Khaleghimeybodi, Pablo Francisco Faundez Hoffmann, Ravish Mehra, Salvael Ortega Estrada, Tetsuro Oishi
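Of the several features listed above, the camera-aided IMU drift correction lends itself to a short sketch: blend the integrated IMU yaw with occasional camera-derived yaw fixes, a basic complementary-filter pattern. The blending weight, the None-marks-missing convention, and the function name are assumptions, not the published method.

```python
def fuse_orientation(imu_yaw_deg, camera_yaw_deg, camera_weight=0.02):
    """Blend a drifting IMU yaw track with sparse camera-derived yaw fixes;
    None in `camera_yaw_deg` marks frames without a usable camera fix."""
    estimate = imu_yaw_deg[0]
    fused = [estimate]
    for i in range(1, len(imu_yaw_deg)):
        estimate += imu_yaw_deg[i] - imu_yaw_deg[i - 1]          # integrate IMU delta
        if camera_yaw_deg[i] is not None:
            estimate += camera_weight * (camera_yaw_deg[i] - estimate)  # pull toward fix
        fused.append(estimate)
    return fused

# Toy usage: drifting IMU yaw corrected by a camera fix every tenth frame.
imu = [0.05 * i for i in range(100)]
cam = [0.0 if i % 10 == 0 else None for i in range(100)]
fused = fuse_orientation(imu, cam)
```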
-
Publication number: 20220116705
Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to the user.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra