Patents by Inventor Brian Lloyd Schmidt

Brian Lloyd Schmidt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11190867
    Abstract: A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (it lets ambient sound in and preserves natural hearing of the environment). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience. (A simplified crossover sketch illustrating the bass/treble split appears after this listing.)
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: November 30, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Brian Lloyd Schmidt, David Thomas Roach, Michael Z. Land, Richard D. Herr
  • Patent number: 11134357
    Abstract: An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time may also be designated as an active virtual speaker at a second time to ensure the processing completes. (An illustrative sketch of this speaker-culling step appears after this listing.)
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: September 28, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Brian Lloyd Schmidt, Samuel Charles Dicker
  • Publication number: 20210160647
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed. According to examples of the method, an audio event associated with the mixed reality environment is detected. The audio event is associated with a first audio signal. A location of the user with respect to the mixed reality environment is determined. An acoustic region associated with the location of the user is identified. A first acoustic parameter associated with the first acoustic region is determined. A transfer function is determined using the first acoustic parameter. The transfer function is applied to the first audio signal to produce a second audio signal, which is then presented to the user. (An illustrative sketch of this region-to-transfer-function flow appears after this listing.)
    Type: Application
    Filed: November 4, 2020
    Publication date: May 27, 2021
    Inventors: Brian Lloyd Schmidt, Jehangir Tajik, Jean-Marc Jot
  • Publication number: 20210152970
    Abstract: Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reflection of the input audio signal in a surface of the virtual environment. (An illustrative sketch of this direct-plus-reflection bus structure appears after this listing.)
    Type: Application
    Filed: November 6, 2020
    Publication date: May 20, 2021
    Inventors: Jean-Marc Jot, Samuel Charles Dicker, Brian Lloyd Schmidt, Remi Samuel Audfray
  • Patent number: 10863301
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed. According to examples of the method, an audio event associated with the mixed reality environment is detected. The audio event is associated with a first audio signal. A location of the user with respect to the mixed reality environment is determined. An acoustic region associated with the location of the user is identified. A first acoustic parameter associated with the first acoustic region is determined. A transfer function is determined using the first acoustic parameter. The transfer function is applied to the first audio signal to produce a second audio signal, which is then presented to the user.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: December 8, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Brian Lloyd Schmidt, Jehangir Tajik, Jean-Marc Jot
  • Patent number: 10863300
    Abstract: Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reflection of the input audio signal in a surface of the virtual environment.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: December 8, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Jean-Marc Jot, Samuel Charles Dicker, Brian Lloyd Schmidt, Remi Samuel Audfray
  • Patent number: 10838210
    Abstract: A display system can include a head-mounted display configured to project light to an eye of a user to display augmented reality image content to the user. The display system can include one or more user sensors configured to sense the user and can include one or more environmental sensors configured to sense surroundings of the user. The display system can also include processing electronics in communication with the display, the one or more user sensors, and the one or more environmental sensors. The processing electronics can be configured to sense a situation involving user focus, determine user intent for the situation, and alter user perception of a real or virtual object within the vision field of the user based at least in part on the user intent and/or sensed situation involving user focus. The processing electronics can be configured to at least one of enhance or de-emphasize the user perception of the real or virtual object within the vision field of the user. (A structural sketch of this focus-based enhance/de-emphasize policy appears after this listing.)
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: November 17, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Nastasja U. Robaina, Nicole Elizabeth Samec, Christopher M. Harrises, Rony Abovitz, Mark Baerenrodt, Brian Lloyd Schmidt
  • Publication number: 20200260208
    Abstract: An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time, may also be designated as an active virtual speaker at a second time to ensure the processing completes.
    Type: Application
    Filed: April 28, 2020
    Publication date: August 13, 2020
    Inventors: Brian Lloyd Schmidt, Samuel Charles Dicker
  • Publication number: 20200196087
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed. According to examples of the method, an audio event associated with the mixed reality environment is detected. The audio event is associated with a first audio signal. A location of the user with respect to the mixed reality environment is determined. An acoustic region associated with the location of the user is identified. A first acoustic parameter associated with the first acoustic region is determined. A transfer function is determined using the first acoustic parameter. The transfer function is applied to the first audio signal to produce a second audio signal, which is then presented to the user.
    Type: Application
    Filed: February 27, 2020
    Publication date: June 18, 2020
    Inventors: Brian Lloyd Schmidt, Jehangir Tajik, Jean-Marc Jot
  • Publication number: 20200183171
    Abstract: A display system can include a head-mounted display configured to project light to an eye of a user to display augmented reality image content to the user. The display system can include one or more user sensors configured to sense the user and can include one or more environmental sensors configured to sense surroundings of the user. The display system can also include processing electronics in communication with the display, the one or more user sensors, and the one or more environmental sensors. The processing electronics can be configured to sense a situation involving user focus, determine user intent for the situation, and alter user perception of a real or virtual object within the vision field of the user based at least in part on the user intent and/or sensed situation involving user focus. The processing electronics can be configured to at least one of enhance or de-emphasize the user perception of the real or virtual object within the vision field of the user.
    Type: Application
    Filed: February 13, 2020
    Publication date: June 11, 2020
    Inventors: Nastasja U. Robaina, Nicole Elizabeth Samec, Christopher M. Harrises, Rony Abovitz, Mark Baerenrodt, Brian Lloyd Schmidt
  • Patent number: 10667072
    Abstract: An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time, may also be designated as an active virtual speaker at a second time to ensure the processing completes.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: May 26, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Brian Lloyd Schmidt, Samuel Charles Dicker
  • Publication number: 20200112813
    Abstract: Systems, devices, and methods for capturing audio which can be used in applications such as virtual reality, augmented reality, and mixed reality systems. Some systems can include a plurality of distributed monitoring devices. Each monitoring device can include a microphone and a location tracking unit. The monitoring devices can capture audio signals in an environment, as well as location tracking signals which respectively indicate the locations of the monitoring devices over time during capture of the audio signals. The system can also include a processor to receive the audio signals and the location tracking signals. The processor can determine one or more acoustic properties of the environment based on the audio signals and the location tracking signals. (A sketch showing how audio plus location tracking can yield an acoustic property appears after this listing.)
    Type: Application
    Filed: December 4, 2019
    Publication date: April 9, 2020
    Inventors: George A. Sanger, Brian Lloyd Schmidt, Anastasia A. Tajik, Terry Micheal O'Gara, David Matthew Shumway, Alan Steven Howarth
  • Patent number: 10616705
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed. According to examples of the method, an audio event associated with the mixed reality environment is detected. The audio event is associated with a first audio signal. A location of the user with respect to the mixed reality environment is determined. An acoustic region associated with the location of the user is identified. A first acoustic parameter associated with the first acoustic region is determined. A transfer function is determined using the first acoustic parameter. The transfer function is applied to the first audio signal to produce a second audio signal, which is then presented to the user.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: April 7, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Brian Lloyd Schmidt, Jehangir Tajik, Jean-Marc Jot
  • Patent number: 10531220
    Abstract: Systems and methods for capturing audio which can be used in applications such as virtual reality, augmented reality, and mixed reality systems. Some systems may include a plurality of distributed monitoring devices in an environment, each having a microphone and a location tracking unit. The system can capture audio signals while also capturing location tracking signals which indicate the locations of the monitoring devices over time during capture of the audio signals. The system can generate a representation of at least a portion of a sound wave field in the environment based on the audio signals and the location tracking signals. The system may also determine one or more acoustic properties of the environment based on the audio signals and the location tracking signals.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: January 7, 2020
    Assignee: Magic Leap, Inc.
    Inventors: George A. Sanger, Brian Lloyd Schmidt, Anastasia A. Tajik, Terry Micheal O'Gara, David Matthew Shumway, Alan Steven Howarth
  • Publication number: 20190387352
    Abstract: Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reflection of the input audio signal in a surface of the virtual environment.
    Type: Application
    Filed: June 18, 2019
    Publication date: December 19, 2019
    Inventors: Jean-Marc Jot, Samuel Charles Dicker, Brian Lloyd Schmidt, Remi Samuel Audfray
  • Publication number: 20190387350
    Abstract: Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment.
    Type: Application
    Filed: June 18, 2019
    Publication date: December 19, 2019
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Brian Lloyd Schmidt
  • Publication number: 20190379992
    Abstract: An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time, may also be designated as an active virtual speaker at a second time to ensure the processing completes.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 12, 2019
    Inventors: Brian Lloyd Schmidt, Samuel Charles Dicker
  • Publication number: 20190116448
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed. According to examples of the method, an audio event associated with the mixed reality environment is detected. The audio event is associated with a first audio signal. A location of the user with respect to the mixed reality environment is determined. An acoustic region associated with the location of the user is identified. A first acoustic parameter associated with the first acoustic region is determined. A transfer function is determined using the first acoustic parameter. The transfer function is applied to the first audio signal to produce a second audio signal, which is then presented to the user.
    Type: Application
    Filed: October 17, 2018
    Publication date: April 18, 2019
    Inventors: Brian Lloyd Schmidt, Jehangir Tajik, Jean-Marc Jot
  • Publication number: 20190011703
    Abstract: A display system can include a head-mounted display configured to project light to an eye of a user to display augmented reality image content to the user. The display system can include one or more user sensors configured to sense the user and can include one or more environmental sensors configured to sense surroundings of the user. The display system can also include processing electronics in communication with the display, the one or more user sensors, and the one or more environmental sensors. The processing electronics can be configured to sense a situation involving user focus, determine user intent for the situation, and alter user perception of a real or virtual object within the vision field of the user based at least in part on the user intent and/or sensed situation involving user focus. The processing electronics can be configured to at least one of enhance or de-emphasize the user perception of the real or virtual object within the vision field of the user.
    Type: Application
    Filed: July 24, 2017
    Publication date: January 10, 2019
    Inventors: Nastasja U. Robaina, Nicole Elizabeth Samec, Christopher M. Harrises, Rony Abovitz, Mark Baerenrodt, Brian Lloyd Schmidt
  • Patent number: 10157625
    Abstract: The subject disclosure is directed towards a technology that may be used in an audio processing environment. Nodes of an audio flow graph are associated with virtual mix buffers. As the flow graph is processed, commands and virtual mix buffer data are provided to audio fixed-function processing blocks. Each virtual mix buffer is mapped to a physical mix buffer, and the associated command is executed with respect to the physical mix buffer. One physical mix buffer may be used as an input data buffer for the audio fixed-function processing block, and another physical mix buffer as an output data buffer, for example. (A sketch of this virtual-to-physical mix buffer mapping appears after this listing.)
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: December 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John A. Tardif, Brian Lloyd Schmidt, Sunil Kumar Vemula, Robert N. Heitkamp
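
Illustrative sketches

Patent 11190867 describes earphones in which a woofer reproduces bass frequencies close to the ear canal while a tweeter reproduces treble frequencies, possibly from farther away. The patent concerns driver placement rather than software, but the bass/treble split itself can be pictured with a simple first-order crossover, sketched below in Python. The 2 kHz cutoff, sample rate, and filter choice are assumptions, not values from the patent.

    import math

    def one_pole_lowpass_coeff(cutoff_hz, sample_rate):
        return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

    def crossover(signal, cutoff_hz=2000.0, sample_rate=48000.0):
        """Split a signal into a bass band (woofer) and a treble band (tweeter)
        using a first-order crossover: treble = input - bass."""
        a = one_pole_lowpass_coeff(cutoff_hz, sample_rate)
        bass, treble, prev = [], [], 0.0
        for s in signal:
            prev = (1.0 - a) * s + a * prev
            bass.append(prev)          # would drive the woofer, near the ear canal
            treble.append(s - prev)    # would drive the tweeter, possibly farther away
        return bass, treble

    impulse = [1.0] + [0.0] * 7
    bass, treble = crossover(impulse)
    print([round(v, 3) for v in bass])
    print([round(v, 3) for v in treble])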
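
Patent 11134357 (and the related filings 20200260208, 20190379992, and patent 10667072) summarizes a renderer that pans a source across a fixed set of F virtual speakers but runs the expensive per-speaker processing only for a selected subset P, culling speakers with negligible energy and keeping previously active speakers alive so their processing can finish. The sketch below shows one plausible reading of that selection step; the speaker layout, cosine panning law, and energy threshold are invented for illustration and are not taken from the patent.

    import math

    # Hypothetical fixed layout of F = 8 virtual speakers (azimuths in degrees).
    SPEAKER_AZIMUTHS = [0, 45, 90, 135, 180, 225, 270, 315]

    def panning_gains(source_az, speaker_azimuths, spread=60.0):
        """Toy cosine panning law: gain falls off with angular distance to the source."""
        gains = []
        for az in speaker_azimuths:
            diff = abs((source_az - az + 180.0) % 360.0 - 180.0)  # shortest angular distance
            g = math.cos(math.radians(diff * 90.0 / spread)) if diff < spread else 0.0
            gains.append(max(g, 0.0))
        norm = math.sqrt(sum(g * g for g in gains)) or 1.0        # preserve overall energy
        return [g / norm for g in gains]

    def select_active_speakers(gains, energy_threshold=0.05, previously_active=None):
        """Low-energy culling: drop speakers whose energy contribution is negligible.
        Speakers active on the previous block stay active so their processing can
        complete, echoing the 'active at a first time ... second time' language."""
        active = {i for i, g in enumerate(gains) if g * g >= energy_threshold}
        return active | (previously_active or set())

    def render(source_az, prev_active=None):
        gains = panning_gains(source_az, SPEAKER_AZIMUTHS)
        active = select_active_speakers(gains, previously_active=prev_active)
        # Only this subset P would be handed to the per-speaker decode/virtualize stage.
        return {i: round(gains[i], 3) for i in sorted(active)}

    print(render(30.0))                      # most virtual speakers are culled
    print(render(200.0, prev_active={1}))    # speaker 1 kept alive to finish processing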
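
Publication 20210160647 and patents 10863301 and 10616705 share an abstract in which the listener's location selects an acoustic region, the region's acoustic parameters yield a transfer function, and that transfer function converts the event's first audio signal into the second signal that is played back. The sketch below mimics that flow with invented region data and a one-pole low-pass filter standing in for the transfer function; none of the names or parameter values come from the patents.

    import math
    from dataclasses import dataclass

    # Hypothetical per-region acoustic parameters; a real system would derive
    # them from the mapped mixed reality environment.
    @dataclass
    class AcousticParams:
        cutoff_hz: float   # crude stand-in for absorption / damping
        gain: float        # overall level adjustment for the region

    REGIONS = {
        "small_room": AcousticParams(cutoff_hz=4000.0, gain=0.9),
        "large_hall": AcousticParams(cutoff_hz=8000.0, gain=1.0),
    }

    def region_for_location(x, y, z):
        """Toy lookup: identify the acoustic region containing the user."""
        return "small_room" if abs(x) < 3.0 and abs(y) < 3.0 else "large_hall"

    def transfer_function(params, sample_rate=48000.0):
        """Derive a simple transfer function (one-pole low-pass plus gain)
        from the region's acoustic parameters."""
        a = math.exp(-2.0 * math.pi * params.cutoff_hz / sample_rate)
        def apply(signal):
            out, prev = [], 0.0
            for s in signal:
                prev = (1.0 - a) * s + a * prev
                out.append(params.gain * prev)
            return out
        return apply

    def present_audio_event(first_signal, user_location):
        """Locate the user, pick the acoustic region, build the transfer function,
        and produce the second signal that is presented to the user."""
        params = REGIONS[region_for_location(*user_location)]
        return transfer_function(params)(first_signal)

    impulse = [1.0] + [0.0] * 7
    print([round(v, 3) for v in present_audio_event(impulse, (1.0, 2.0, 0.0))])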
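
Publication 20210152970, patent 10863300, and publications 20190387352 and 20190387350 describe rendering each sound source twice: a direct (first intermediate) signal panned by source location onto a first bus, and a reflection or reverberation (second intermediate) signal routed to a shared bus. The sketch below illustrates that bus structure with a toy stereo pan and a delayed, attenuated copy standing in for a reflection; the pan law, delays, and send level are assumptions rather than anything specified in the filings.

    def pan_gain_lr(source_pos, listener_pos):
        """Toy stereo pan: gains depend on whether the source is left or right of the listener."""
        dx = source_pos[0] - listener_pos[0]
        right = min(max(0.5 + 0.5 * dx / 5.0, 0.0), 1.0)
        return 1.0 - right, right

    def render_sources(input_signal, source_positions, listener_pos, reflection_send=0.3):
        """For each source, a first intermediate signal (direct sound, panned by the
        source location) is mixed onto the first bus, and a second intermediate signal
        (here a delayed, attenuated copy standing in for a reflection off a surface)
        is mixed onto a shared reflection bus."""
        n = len(input_signal)
        main_l, main_r = [0.0] * n, [0.0] * n   # first bus (stereo)
        reflection_bus = [0.0] * n              # shared bus, later fed to a reflections/reverb unit

        for pos in source_positions:
            gl, gr = pan_gain_lr(pos, listener_pos)
            delay = 8 + int(4 * abs(pos[0] - listener_pos[0]))   # invented geometric delay
            for i, s in enumerate(input_signal):
                main_l[i] += gl * s
                main_r[i] += gr * s
                if i + delay < n:
                    reflection_bus[i + delay] += reflection_send * s

        # A single reflections/reverberation processor would run once on
        # reflection_bus and its output would be mixed back with the first bus.
        return (main_l, main_r), reflection_bus

    click = [1.0] + [0.0] * 31
    (left, right), refl = render_sources(click, [(-2.0, 0.0), (3.0, 1.0)], (0.0, 0.0))
    print(round(left[0], 2), round(right[0], 2), refl.index(0.3))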
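
Patent 10838210 (with publications 20200183171 and 20190011703) covers a head-mounted display that senses a situation involving user focus, infers user intent, and then enhances or de-emphasizes real or virtual objects in the user's vision field. As a purely structural illustration, the sketch below applies an invented policy (boost the gazed-at object, dim everything else); the sensing, intent inference, and rendering details of the patent are outside its scope.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class SceneObject:
        name: str
        is_virtual: bool
        emphasis: float   # stand-in for how strongly the object is rendered / highlighted

    def adjust_perception(objects, gazed_object, user_is_focused):
        """Invented policy: when the sensed situation indicates focus on one object,
        enhance that object and de-emphasize the rest; otherwise leave the scene alone."""
        if not user_is_focused:
            return list(objects)
        return [
            replace(obj, emphasis=min(1.0, obj.emphasis + 0.3)) if obj.name == gazed_object
            else replace(obj, emphasis=obj.emphasis * 0.5)
            for obj in objects
        ]

    scene = [SceneObject("virtual_menu", True, 0.7), SceneObject("wall_clock", False, 1.0)]
    for obj in adjust_perception(scene, "virtual_menu", user_is_focused=True):
        print(obj)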
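
Publication 20200112813 and patent 10531220 describe distributed monitoring devices that capture audio together with location-tracking signals, from which acoustic properties of the environment are determined. One property that needs both kinds of data is a distance-attenuation exponent; the sketch below fits it from several captures of the same sound at tracked positions. The level model and the synthetic data are assumptions and do not reflect the patents' actual method.

    import math

    def rms(signal):
        return math.sqrt(sum(s * s for s in signal) / len(signal))

    def distance_attenuation_exponent(captures, source_pos):
        """Fit level ~ 1 / distance**alpha across several tracked devices that all
        captured the same sound: log(rms) = c - alpha * log(distance), least squares.
        Free-field propagation gives alpha near 1; a very reverberant room gives less."""
        xs = [math.log(math.dist(pos, source_pos)) for pos, _ in captures]
        ys = [math.log(rms(audio)) for _, audio in captures]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        return -slope

    # Synthetic captures: (tracked device position, recorded audio), levels falling off ~1/d.
    source = (0.0, 0.0)
    captures = [((d, 0.0), [1.0 / d] * 100) for d in (1.0, 2.0, 4.0, 8.0)]
    print(round(distance_attenuation_exponent(captures, source), 2))   # -> 1.0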
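
Patent 10157625 describes an audio flow graph whose nodes refer to virtual mix buffers; as the graph executes, each virtual buffer is mapped to a physical mix buffer and commands are issued to fixed-function processing blocks, with one physical buffer serving as input and another as output. The sketch below imitates that mapping in software; the pool size, command format, and the gain routine standing in for fixed-function hardware are all invented for illustration.

    FRAMES = 4                # invented mix buffer length
    PHYSICAL_POOL_SIZE = 2    # invented pool of physical mix buffers

    class MixBufferAllocator:
        """Maps virtual mix buffer ids referenced by flow graph nodes onto a
        small pool of physical mix buffers."""
        def __init__(self):
            self.free = list(range(PHYSICAL_POOL_SIZE))
            self.mapping = {}                                   # virtual id -> physical index
            self.physical = [[0.0] * FRAMES for _ in range(PHYSICAL_POOL_SIZE)]

        def acquire(self, virtual_id):
            if virtual_id not in self.mapping:
                self.mapping[virtual_id] = self.free.pop()
            return self.mapping[virtual_id]

        def buffer(self, virtual_id):
            return self.physical[self.acquire(virtual_id)]

    def fixed_function_gain(inp, out, gain):
        """Software stand-in for a fixed-function processing block: out += gain * in."""
        for i in range(FRAMES):
            out[i] += gain * inp[i]

    def run_graph(alloc, commands):
        """Each command names virtual buffers; they are bound to physical buffers
        only when the fixed-function block is about to consume them."""
        for op, v_in, v_out, gain in commands:
            if op == "gain":
                fixed_function_gain(alloc.buffer(v_in), alloc.buffer(v_out), gain)

    alloc = MixBufferAllocator()
    alloc.buffer("voice0")[:] = [0.25, 0.5, -0.5, 1.0]          # pretend decoded audio
    run_graph(alloc, [("gain", "voice0", "master", 0.8)])
    print(alloc.buffer("master"))                               # mixed output on the physical buffer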