Patents by Inventor William Owen Brimijoin, II

William Owen Brimijoin, II has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11943602
    Abstract: A system can include a position sensor configured to output position data of a HWD. The system can include one or more processors configured to identify a first head angle of the HWD using the position sensor, generate an audio signal using the first head angle, identify a second head angle of the HWD using the position sensor, determine an angle error based at least on the first head angle and the second head angle, and apply at least one of a time difference or a level difference to the audio signal based at least on the angle error to adjust the audio signal. The system can include an audio output device configured to output the adjusted audio signal. By adjusting the audio signal using the angle error, the system can correct for long spatial update latencies and reduce the perceptual impact of such latencies for the user.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: March 26, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: William Owen Brimijoin, II, Henrik Gert Hassager, Sebastià Vicenç Amengual Garí
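
    The spatial-update-latency idea in patent 11943602 above lends itself to a short illustration: compare the head angle the audio was rendered for with the newer head angle from the position sensor, convert the angle error into an interaural time and level difference, and apply those to the already-rendered signal. The Woodworth-style ITD formula, the ILD slope, and the sign convention below are illustrative assumptions, not the patented method.

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s
    HEAD_RADIUS = 0.0875     # m, average head radius (assumed)

    def apply_angle_error_correction(stereo, fs, rendered_angle_deg, current_angle_deg):
        """Nudge an already-rendered stereo signal toward the newer head angle
        by applying a small interaural time difference (ITD) and level
        difference (ILD) derived from the angle error."""
        err = np.deg2rad(current_angle_deg - rendered_angle_deg)

        # Woodworth-style ITD for a spherical head (illustrative model).
        itd_s = (HEAD_RADIUS / SPEED_OF_SOUND) * (err + np.sin(err))
        delay = int(round(abs(itd_s) * fs))

        # Simple broadband ILD, split symmetrically across the two ears (assumed).
        ild_db = 6.0 * np.sin(err)
        near_gain = 10 ** (+abs(ild_db) / 40)
        far_gain = 10 ** (-abs(ild_db) / 40)

        left, right = stereo[0].copy(), stereo[1].copy()
        if err > 0:    # error toward the left ear: delay and attenuate the right channel
            right = np.concatenate([np.zeros(delay), right])[: len(right)]
            left, right = left * near_gain, right * far_gain
        elif err < 0:  # error toward the right ear: delay and attenuate the left channel
            left = np.concatenate([np.zeros(delay), left])[: len(left)]
            left, right = left * far_gain, right * near_gain
        return np.stack([left, right])
    ```
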
  • Publication number: 20240056733
    Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to a user.
    Type: Application
    Filed: October 26, 2023
    Publication date: February 15, 2024
    Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra
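
    Reading the abstract above as a pipeline — capture sound, identify a target sound source, filter its signal, hand the result to the in-ear device — a minimal sketch might look like the following. The speech-band emphasis filter and the ambient mix-back are illustrative choices; the source separation that would produce `source_signals` is assumed, not shown.

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    def enhance_target_source(mic_frames, source_signals, target_index, fs):
        """Apply an emphasis filter to the chosen target source and return
        the augmented signal for presentation on the in-ear device."""
        target = source_signals[target_index]

        # Example filter: fourth-order band-pass over the speech band (assumption).
        b, a = butter(4, [300 / (fs / 2), 3400 / (fs / 2)], btype="band")
        augmented = lfilter(b, a, target)

        # Mix a little raw capture back in so the rest of the local area stays
        # audible (a design choice, not taken from the abstract).
        ambient = 0.1 * np.mean(np.asarray(mic_frames), axis=0)
        n = min(len(augmented), len(ambient))
        return augmented[:n] + ambient[:n]
    ```
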
  • Patent number: 11843926
    Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to a user.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: December 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra
  • Patent number: 11751003
    Abstract: Embodiments relate to personalization of a head-related transfer function (HRTF) for a given user. A sound source is spatialized for an initial position using an initial version of a HRTF to obtain an initial spatialized sound source. Upon presentation of the initial spatialized sound source, at least one property of the HRTF is adjusted in an iterative manner based on at least one perceptive response from the user to generate a version of the HRTF customized for the user. Each perceptive response from the user indicates a respective offset between a perceived position and a target position of the sound source. The customized version of the HRTF is applied to one or more audio channels to form spatialized audio content for the perceived position. The spatialized audio content is presented to the user, wherein the offset between the perceived position and the target position is reduced.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: September 5, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: William Owen Brimijoin, II, Tomasz Rudzki, Sebastià Vicenç Amengual Garí, Michaela Warnecke, Andrew Francl
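
    The iterative personalization loop in patent 11751003 can be sketched as a simple feedback update: play a spatialized probe, ask the user where they perceived it, and nudge an HRTF parameter in proportion to the reported offset. The single scalar azimuth-offset parameter and the step size below are illustrative assumptions, with the rendering and response collection left as application hooks.

    ```python
    def personalize_hrtf(render, get_perceived_offset_deg, target_az_deg,
                         step=0.5, tol_deg=2.0, max_iters=20):
        """Iteratively adjust an HRTF azimuth-offset parameter until the
        user's perceived position matches the target position.

        `render(azimuth_deg)` spatializes and plays the probe at the given
        azimuth; `get_perceived_offset_deg()` returns the user's reported
        offset between perceived and target position."""
        correction = 0.0
        for _ in range(max_iters):
            render(target_az_deg + correction)
            offset = get_perceived_offset_deg()
            if abs(offset) <= tol_deg:
                break
            correction -= step * offset   # move opposite to the reported error
        return correction
    ```
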
  • Patent number: 11670321
    Abstract: A system includes a headset that captures an audio signal and a visual signal of a local area including one or more sound sources. The system determines a strength of the audio signal and a strength of the portion of the visual signal associated with it, compares the two strengths, selects the weaker signal, and augments it. The headset then presents the augmented audio-visual content to the user, enhancing the user's perception of the weaker signal.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: June 6, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Cesare Valerio Parise, William Owen Brimijoin, II, Philip Robinson
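
    The comparison logic in the abstract above reduces to a small decision: whichever modality is weaker gets boosted. A toy sketch, with the strengths assumed to be pre-normalized to a common scale and the boost factor chosen arbitrarily:

    ```python
    def augment_weaker_modality(audio_strength, visual_strength,
                                audio_gain=1.0, visual_gain=1.0, boost=1.5):
        """Compare the two modality strengths and boost whichever is weaker.
        The returned gains would drive downstream audio level / visual
        highlighting adjustments."""
        if audio_strength < visual_strength:
            audio_gain *= boost    # e.g. raise the level of the associated sound
        else:
            visual_gain *= boost   # e.g. increase contrast or highlighting
        return audio_gain, visual_gain
    ```
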
  • Patent number: 11644894
    Abstract: A method comprising determining a set of position parameters for an inertial measurement unit (IMU) on a headset worn by a user. The set of position parameters includes at least a first yaw measurement and a first roll measurement. The set describes a pointing vector. The method further comprises calculating a drift correction component that describes a rate of correction. The drift correction component is based at least in part on the set of position parameters. The method further comprises applying the drift correction component to one or more subsequent yaw measurements for the IMU. The drift correction component forces an estimated nominal position vector to the pointing vector at the rate of correction.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: May 9, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: William Owen Brimijoin, II, Andrew Lovitt
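
    The drift-correction step described in patent 11644894 can be sketched as a rate-limited pull of the estimated nominal (resting) yaw toward the pointing vector's yaw, with the same correction folded into the reported yaw. The specific update rule and units are assumptions; the abstract only fixes the idea of a bounded rate of correction.

    ```python
    def apply_drift_correction(yaw_deg, nominal_yaw_deg, pointing_yaw_deg,
                               rate_deg_per_update=0.02):
        """One drift-correction update: move the estimated nominal yaw toward
        the pointing vector's yaw by at most the rate of correction, and shift
        the current yaw measurement by the same amount.
        Returns (corrected_yaw_deg, updated_nominal_yaw_deg)."""
        error = pointing_yaw_deg - nominal_yaw_deg
        step = max(-rate_deg_per_update, min(rate_deg_per_update, error))
        return yaw_deg + step, nominal_yaw_deg + step
    ```

    Called once per IMU sample, a bounded correction like this slowly removes accumulated yaw drift without visibly disturbing fast head motion.
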
  • Patent number: 11616580
    Abstract: A system receives audio data in a frequency range of 20 Hz-20 kHz. The received audio data is encoded by the system into ultrasonic data in frequencies that are greater than 20 kHz, and transmitted into a local area that is proximal to the transmitting device, i.e., within the transmission range of the transmitting device. An ultrasonic communication device that is located in the transmission range of the transmitting device may receive the ultrasonic data. The received ultrasonic data is decoded by the ultrasonic communication system in the receiving device into audio data in a frequency range of 20 Hz-20 kHz, and subsequently presented to a user of the receiving ultrasonic communication device.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: March 28, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Peter Harty Dodds, Morteza Khaleghimeybodi, Philip Robinson, Scott Phillips Porter, William Owen Brimijoin, II, Andrew Lovitt
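
    The abstract above does not name a modulation scheme, so the sketch below simply amplitude-modulates baseband audio onto a 40 kHz carrier and demodulates it on the receiving side; the carrier frequency, sample rate, and AM scheme are all illustrative assumptions rather than the patented encoding.

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    FS = 192_000          # sample rate high enough to carry the ultrasonic band
    CARRIER_HZ = 40_000   # illustrative carrier above the audible range

    def encode_to_ultrasound(audio, fs=FS, carrier_hz=CARRIER_HZ):
        """Amplitude-modulate baseband audio (assumed normalized to [-1, 1])
        onto an ultrasonic carrier."""
        t = np.arange(len(audio)) / fs
        return (1.0 + 0.5 * np.asarray(audio)) * np.cos(2 * np.pi * carrier_hz * t)

    def decode_from_ultrasound(ultrasonic, fs=FS, carrier_hz=CARRIER_HZ):
        """Mix the received ultrasonic signal back down with the carrier and
        low-pass filter it to the audible range, undoing the AM scaling."""
        t = np.arange(len(ultrasonic)) / fs
        mixed = np.asarray(ultrasonic) * np.cos(2 * np.pi * carrier_hz * t)
        b, a = butter(6, 20_000 / (fs / 2), btype="low")
        return 4.0 * lfilter(b, a, mixed) - 2.0
    ```
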
  • Patent number: 11579837
    Abstract: A system creates an audio profile. The audio profile may be stored in a database. For example, the audio profile may be securely stored in a database of a social network and associated with a user account. The audio profile may contain data describing the way in which the specific user hears and interprets sounds. Systems and applications which present sounds to the user may access the audio profile and modify the sounds presented to the user based on the data in the audio profile to enhance the audio experience for the user.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: February 14, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Philip Robinson, Antonio John Miller, William Owen Brimijoin, II, Andrew Lovitt
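
    As a rough picture of what such a profile might hold and how an application could use it, the sketch below stores per-band gains and applies them with a small filter bank. The fields, band layout, and filter bank are hypothetical, not the patent's schema.

    ```python
    from dataclasses import dataclass

    import numpy as np
    from scipy.signal import butter, sosfilt

    @dataclass
    class AudioProfile:
        """Hypothetical per-user hearing profile."""
        user_id: str
        band_edges_hz: tuple = (250.0, 1000.0, 4000.0, 8000.0)
        band_gains_db: tuple = (0.0, 0.0, 0.0, 0.0, 0.0)  # one gain per band

    def apply_profile(audio, fs, profile):
        """Split the signal into the profile's frequency bands and apply
        each band's gain before presentation."""
        edges = (0.0,) + tuple(profile.band_edges_hz) + (fs / 2.0,)
        bands = list(zip(edges[:-1], edges[1:]))
        out = np.zeros(len(audio))
        for (lo, hi), gain_db in zip(bands, profile.band_gains_db):
            if lo <= 0.0:
                sos = butter(4, hi / (fs / 2), btype="low", output="sos")
            elif hi >= fs / 2:
                sos = butter(4, lo / (fs / 2), btype="high", output="sos")
            else:
                sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
            out += 10 ** (gain_db / 20) * sosfilt(sos, audio)
        return out
    ```
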
  • Publication number: 20220394405
    Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on the received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters and a configured delay between the multiple channels. The system applies the configured audio TLDR to the audio signal received at the single channel to generate spatialized audio content for each of the multiple channels and presents the generated spatialized multi-channel audio content to a user via a headset.
    Type: Application
    Filed: August 22, 2022
    Publication date: December 8, 2022
    Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Harty Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke
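
    The renderer in this publication turns a single-channel signal into left and right channels using a configured delay and level difference; the binaural dynamic filters it also configures are beyond a short example. A minimal sketch, with the sign convention and the symmetric gain split assumed:

    ```python
    import numpy as np

    def render_time_level_difference(mono, fs, itd_s, ild_db):
        """Produce a two-channel signal from a mono input by delaying the
        far ear by the configured interaural time difference and splitting
        the configured level difference across the two ears."""
        delay = int(round(abs(itd_s) * fs))
        delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
        near = np.asarray(mono) * 10 ** (+ild_db / 40)
        far = delayed * 10 ** (-ild_db / 40)
        # Positive ITD/ILD places the source toward the left ear (assumed convention).
        left, right = (near, far) if itd_s >= 0 else (far, near)
        return np.stack([left, right])
    ```
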
  • Patent number: 11523240
    Abstract: An audio system generates customized head-related transfer functions (HRTFs) for a user. The audio system receives an initial set of estimated HRTFs. The initial set of HRTFs may have been estimated using a trained machine learning and computer vision system and pictures of the user's ears. The audio system generates a set of test locations using the initial set of HRTFs. The audio system presents test sounds at each of the initial set of test locations using the initial set of HRTFs. The audio system monitors user responses to the test sounds. The audio system uses the monitored responses to generate a new set of estimated HRTFs and a new set of test locations. The process repeats until a threshold accuracy is achieved or until a set period of time expires. The audio system presents audio content to the user using the customized HRTFs.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: December 6, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Vamsi Krishna Ithapu, William Owen Brimijoin, II, Henrik Gert Hassager
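
    The refinement procedure in patent 11523240 is essentially a loop: present test sounds at candidate locations using the current HRTF estimate, collect the user's responses, refit, and stop at a threshold accuracy or when a time budget expires. A minimal sketch, with all of the estimation machinery left as application hooks since the abstract does not detail it:

    ```python
    import time

    def refine_hrtfs(initial_hrtfs, propose_test_locations, present_and_collect,
                     refit, accuracy_of, target_accuracy=0.9, time_budget_s=120.0):
        """Iteratively refine an HRTF estimate from user responses to test
        sounds. The five callables stand in for the test-location generator,
        playback/response collection, model refit, and accuracy measure."""
        hrtfs = initial_hrtfs
        deadline = time.monotonic() + time_budget_s
        while time.monotonic() < deadline:
            locations = propose_test_locations(hrtfs)
            responses = present_and_collect(hrtfs, locations)
            hrtfs = refit(hrtfs, locations, responses)
            if accuracy_of(hrtfs, responses) >= target_accuracy:
                break
        return hrtfs
    ```
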
  • Publication number: 20220342213
    Abstract: Embodiments relate to an audio system for various audio applications. The audio system registers the locations of one or more sound sources and selects the target sound source based on a hidden Markov model. A health monitoring system that integrates an audio system may use information collected by sensors to monitor an amount of social interaction of a user and predict a risk of dementia and/or hearing loss based on a model. The audio system uses a current/voltage sensor to detect electrical drive signals and determine a level of audio leakage from the audio system. Additionally, the audio system may update a video stream with an audio background that matches an artificial visual background in the video stream, so that the updated video stream sounds as if the user were located in a physical space corresponding to that background.
    Type: Application
    Filed: July 8, 2022
    Publication date: October 27, 2022
    Inventors: Hao Lu, William Owen Brimijoin, II, Christi Miller, Limin Zhou, Andrew Lovitt
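
    Of the several embodiments listed above, the hidden-Markov-model selection of a target sound source is the most self-contained to sketch: treat "which registered source the user is attending to" as the hidden state and update a belief over it each frame. The sticky transition matrix and the observation likelihoods (e.g., derived from gaze or voice activity) are illustrative assumptions.

    ```python
    import numpy as np

    def update_attended_source(prev_belief, observation_likelihoods, stay_prob=0.9):
        """One forward-algorithm step of an HMM whose hidden state is
        'which registered sound source is the target'. Returns the updated
        belief and the index of the current best target."""
        n = len(prev_belief)
        switch_prob = (1.0 - stay_prob) / max(n - 1, 1)
        transition = np.full((n, n), switch_prob) + np.eye(n) * (stay_prob - switch_prob)
        predicted = transition.T @ prev_belief            # predict step
        posterior = predicted * np.asarray(observation_likelihoods)  # weight by evidence
        posterior /= posterior.sum()
        return posterior, int(np.argmax(posterior))
    ```
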
  • Patent number: 11457325
    Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on the received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters and a configured delay between the multiple channels. The system applies the configured audio TLDR to the audio signal received at the single channel to generate spatialized audio content for each of the multiple channels and presents the generated spatialized multi-channel audio content to a user via a headset.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: September 27, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke
  • Patent number: 11409360
    Abstract: A method comprising determining a set of position parameters for an inertial measurement unit (IMU) on a headset worn by a user. The set of position parameters includes at least a first yaw measurement and a first roll measurement. The set describes a pointing vector. The method further comprises calculating a drift correction component that describes a rate of correction. The drift correction component is based at least in part on the set of position parameters. The method further comprises applying the drift correction component to one or more subsequent yaw measurements for the IMU. The drift correction component forces an estimated nominal position vector to the pointing vector at the rate of correction.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: August 9, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: William Owen Brimijoin, II, Andrew Lovitt
  • Patent number: 11368231
    Abstract: A system receives audio data in a frequency range of 20 Hz-20 kHz. The received audio data is encoded by the system into ultrasonic data in frequencies that are greater than 20 kHz, and transmitted into a local area that is proximal to the transmitting device, i.e., within the transmission range of the transmitting device. An ultrasonic communication device that is located in the transmission range of the transmitting device may receive the ultrasonic data. The received ultrasonic data is decoded by the ultrasonic communication system in the receiving device into audio data in a frequency range of 20 Hz-20 kHz, and subsequently presented to a user of the receiving ultrasonic communication device.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: June 21, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Peter Harty Dodds, Morteza Khaleghimeybodi, Philip Robinson, Scott Porter, William Owen Brimijoin, II, Andrew Lovitt
  • Publication number: 20220182772
    Abstract: Embodiments relate to an audio system for various artificial reality applications. The audio system performs large scale filter optimization for audio rendering, preserving spatial and intra-population characteristics using neural networks. Further, the audio system performs adaptive hearing enhancement-aware binaural rendering. The audio system includes an in-ear device with an inertial measurement unit (IMU) and a camera. The camera captures image data of a local area, and the image data is used to correct for IMU drift. In some embodiments, the audio system calculates a transducer-to-ear response for an individual ear using an equalization prediction or acoustic simulation framework. Individual ear pressure fields as a function of frequency are generated. Frequency-dependent directivity patterns of the transducers are characterized in the free field. In some embodiments, the audio system includes a headset and one or more removable audio apparatuses for enhancing acoustic features of the headset.
    Type: Application
    Filed: February 22, 2022
    Publication date: June 9, 2022
    Inventors: Peter Harty Dodds, Nava K. Balsam, Vamsi Krishna Ithapu, William Owen Brimijoin, II, Samuel Clapp, Christi Miller, Michaela Warnecke, Nils Thomas Fritiof Lunner, Paul Thomas Calamia, Morteza Khaleghimeybodi, Pablo Francisco Faundez Hoffmann, Ravish Mehra, Salvael Ortega Estrada, Tetsuro Oishi
  • Publication number: 20220116705
    Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to a user.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra
  • Patent number: 11290837
    Abstract: A system that uses persistent sound source selection to augment audio content. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound emitted by sound sources in a local area. The system further comprises an audio controller integrated into the headset. The audio controller receives sound signals corresponding to sounds emitted by sound sources in the local area. The audio controller further updates a ranking of the sound sources based on eye tracking information of the user. The audio controller further selectively applies one or more filters to one or more of the sound signals according to the ranking to generate augmented audio data. The audio controller further provides the augmented audio data to a speaker assembly for presentation to the user.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: March 29, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Nava K. Balsam
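
    The persistent-selection idea in patent 11290837 (rank sources using eye tracking, then filter according to the ranking) can be sketched with gaze dwell time as the ranking signal and a per-rank gain schedule. Both of those specifics are assumptions, and `source_signals` is taken to be a list of equal-length numpy arrays from an upstream separation stage.

    ```python
    import numpy as np

    def rank_and_filter(source_signals, gaze_dwell_s, gains_by_rank_db=(6.0, 0.0, -6.0)):
        """Rank sound sources by how long the user's gaze has dwelt near each
        one, then apply a per-rank gain before mixing for playback."""
        order = np.argsort(gaze_dwell_s)[::-1]           # most-attended source first
        mix = np.zeros_like(source_signals[0], dtype=float)
        for rank, idx in enumerate(order):
            gain_db = gains_by_rank_db[min(rank, len(gains_by_rank_db) - 1)]
            mix += source_signals[idx] * 10 ** (gain_db / 20)
        return mix, order
    ```
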
  • Patent number: 11245984
    Abstract: A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by a user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to a user.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: February 8, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: William Owen Brimijoin, II, Nils Thomas Fritiof Lunner, Philip Robinson, Ravish Mehra
  • Patent number: 11234096
    Abstract: A system for generating individualized HRTFs that are customized to a user of a headset. The system includes a server and an audio system. The server determines the individualized HRTFs based in part on acoustic feature data of the user (e.g., image data, anthropometric features) and a template HRTF. The server provides the individualized HRTFs to the audio system. The audio system presents spatialized audio content to the user using the individualized HRTFs.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: January 25, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: William Owen Brimijoin, II, Henrik Gert Hassager, Vamsi Krishna Ithapu, Philip Robinson
  • Publication number: 20220021996
    Abstract: A system is disclosed for using an audio time and level difference renderer (TLDR) to generate spatialized audio content for multiple channels from an audio signal received at a single channel. The system selects an audio TLDR from a set of audio TLDRs based on received input parameters. The system configures the selected audio TLDR based on the received input parameters using a filter parameter model to generate a configured audio TLDR that comprises a set of configured binaural dynamic filters and a configured delay between the multiple channels. The system applies the configured audio TLDR to the audio signal received at the single channel to generate spatialized audio content for each of the multiple channels and presents the generated spatialized multi-channel audio content to a user via a headset.
    Type: Application
    Filed: July 19, 2021
    Publication date: January 20, 2022
    Inventors: William Owen Brimijoin, II, Samuel Clapp, Peter Dodds, Nava K. Balsam, Tomasz Rudzki, Ryan Rohrer, Kevin Scheumann, Michaela Warnecke