Patents by Inventor Jean-Marc Jot

Jean-Marc Jot has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
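Short illustrative code sketches for several of the listed techniques appear after the listing.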

  • Publication number: 20230245642
    Abstract: Systems and methods for providing accurate and independent control of reverberation properties are disclosed. In some embodiments, a system may include a reverberation processing system, a direct processing system, and a combiner. The reverberation processing system can include a reverb initial power (RIP) control system and a reverberator. The RIP control system can include a reverb initial gain (RIG) and a RIP corrector. The RIG can be configured to apply a RIG value to the input signal, and the RIP corrector can be configured to apply a RIP correction factor to the signal from the RIG. The reverberator can be configured to apply reverberation effects to the signal from the RIP control system. In some embodiments, one or more values and/or correction factors can be calculated and applied such that the signal output from a component in the reverberation processing system is normalized to a predetermined value (e.g., unity (1.0)).
    Type: Application
    Filed: April 6, 2023
    Publication date: August 3, 2023
    Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER
  • Publication number: 20230239651
    Abstract: Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
    Type: Application
    Filed: March 8, 2023
    Publication date: July 27, 2023
    Inventors: Remi Samuel AUDFRAY, Mark Brandon HERTENSTEINER, Samuel Charles DICKER, Blaine Ivin WOOD, Michael Z. LAND, Jean-Marc JOT
  • Publication number: 20230217205
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Application
    Filed: March 10, 2023
    Publication date: July 6, 2023
    Inventors: Colby Nelson LEIDER, Justin Dan MATHEW, Michael Z. LAND, Blaine Ivin WOOD, Jung-Suk LEE, Anastasia Andreyevna TAJIK, Jean-Marc JOT
  • Patent number: 11678117
    Abstract: A method of processing an audio signal is disclosed. According to embodiments of the method, magnitude response information of a prototype filter is determined. The magnitude response information includes a plurality of gain values, at least one of which includes a first gain corresponding to a first frequency. The magnitude response information of the prototype filter is stored. The magnitude response information of the prototype filter at the first frequency is retrieved. Gains are computed for a plurality of control frequencies based on the retrieved magnitude response information of the prototype filter at the first frequency, and the computed gains are applied to the audio signal.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: June 13, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
  • Patent number: 11651762
    Abstract: Systems and methods for providing accurate and independent control of reverberation properties are disclosed. In some embodiments, a system may include a reverberation processing system, a direct processing system, and a combiner. The reverberation processing system can include a reverb initial power (RIP) control system and a reverberator. The RIP control system can include a reverb initial gain (RIG) and a RIP corrector. The RIG can be configured to apply a RIG value to the input signal, and the RIP corrector can be configured to apply a RIP correction factor to the signal from the RIG. The reverberator can be configured to apply reverberation effects to the signal from the RIP control system. In some embodiments, one or more values and/or correction factors can be calculated and applied such that the signal output from a component in the reverberation processing system is normalized to a predetermined value (e.g., unity (1.0)).
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: May 16, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
  • Publication number: 20230128286
    Abstract: Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
    Type: Application
    Filed: December 23, 2022
    Publication date: April 27, 2023
    Inventors: Jean-Marc JOT, Michael MINNICK, Dmitry PASTOUCHENKO, Michael Aaron SIMON, John Emmitt SCOTT, III, Richard St. Clair BAILEY, Shivakumar BALASUBRAMANYAM, Harsharaj AGADI
  • Publication number: 20230121353
    Abstract: Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. A first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the sound source in the virtual environment, and the first intermediate audio signal is associated with a first bus. A second intermediate audio signal is determined. The second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment. The second intermediate audio signal is determined based on a location of the sound source, and further based on an acoustic property of the virtual environment. The second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first and second buses.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 20, 2023
    Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER
  • Patent number: 11632646
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: April 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
  • Patent number: 11627430
    Abstract: Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors configured to execute a method. A method for execution by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: April 11, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Mark Brandon Hertensteiner, Samuel Charles Dicker, Blaine Ivin Wood, Michael Z. Land, Jean-Marc Jot
  • Patent number: 11627428
    Abstract: Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: April 11, 2023
    Inventors: Jean-Marc Jot, Michael Minnick, Dmitry Pastouchenko, Michael Aaron Simon, John Emmitt Scott, III, Richard St. Clair Bailey, Shivakumar Balasubramanyam, Harsharaj Agadi
  • Publication number: 20230094733
    Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
    Type: Application
    Filed: December 2, 2022
    Publication date: March 30, 2023
    Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER, Mark Brandon HERTENSTEINER, Justin Dan MATHEW, Anastasia Andreyevna TAJIK, Nicholas John LaMARTINA
  • Publication number: 20230077524
    Abstract: Examples of the disclosure describe systems and methods for estimating acoustic properties of an environment. In an example method, a first audio signal is received via a microphone of a wearable head device. An envelope of the first audio signal is determined, and a first reverberation time is estimated based on the envelope of the first audio signal. A difference between the first reverberation time and a second reverberation time is determined. A change in the environment is determined based on the difference between the first reverberation time and the second reverberation time. A second audio signal is presented via a speaker of a wearable head device, wherein the second audio signal is based on the second reverberation time.
    Type: Application
    Filed: November 22, 2022
    Publication date: March 16, 2023
    Inventors: Mathieu PARVAIX, Jean-Marc JOT, Colby Nelson LEIDER
  • Patent number: 11570570
    Abstract: Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: January 31, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
  • Publication number: 20230007332
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
    Type: Application
    Filed: September 12, 2022
    Publication date: January 5, 2023
    Inventors: Anastasia Andreyevna TAJIK, Jean-Marc JOT
  • Patent number: 11546716
    Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: January 3, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Mark Brandon Hertensteiner, Justin Dan Mathew, Anastasia Andreyevna Tajik, Nicholas John LaMartina
  • Publication number: 20220417686
    Abstract: Systems and methods for rendering audio signals are disclosed. In some embodiments, a method may receive an input signal including a first portion and a second portion. A first processing stage comprising a first filter is applied to the first portion to generate a first filtered signal. A second processing stage comprising a second filter is applied to the first portion to generate a second filtered signal. A third processing stage comprising a third filter is applied to the second portion to generate a third filtered signal. A fourth processing stage comprising a fourth filter is applied to the second portion to generate a fourth filtered signal. A first output signal is determined based on a sum of the first filtered signal and the third filtered signal. A second output signal is determined based on a sum of the second filtered signal and the fourth filtered signal.
    Type: Application
    Filed: August 26, 2022
    Publication date: December 29, 2022
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
  • Patent number: 11540072
    Abstract: Examples of the disclosure describe systems and methods for estimating acoustic properties of an environment. In an example method, a first audio signal is received via a microphone of a wearable head device. An envelope of the first audio signal is determined, and a first reverberation time is estimated based on the envelope of the first audio signal. A difference between the first reverberation time and a second reverberation time is determined. A change in the environment is determined based on the difference between the first reverberation time and the second reverberation time. A second audio signal is presented via a speaker of a wearable head device, wherein the second audio signal is based on the second reverberation time.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: December 27, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Mathieu Parvaix, Jean-Marc Jot, Colby Nelson Leider
  • Publication number: 20220386065
    Abstract: Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
    Type: Application
    Filed: March 2, 2021
    Publication date: December 1, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Jean-Marc JOT, Michael MINNICK, Dmitry PASTOUCHENKO, Michael Aaron SIMON, John Emmitt SCOTT, III, Richard St. Clair BAILEY, Shivakumar BALASUBRAMANYAM, Harsharaj AGADI
  • Patent number: 11477592
    Abstract: Systems and methods for rendering audio signals are disclosed. In some embodiments, a method may receive an input signal including a first portion and a second portion. A first processing stage comprising a first filter is applied to the first portion to generate a first filtered signal. A second processing stage comprising a second filter is applied to the first portion to generate a second filtered signal. A third processing stage comprising a third filter is applied to the second portion to generate a third filtered signal. A fourth processing stage comprising a fourth filter is applied to the second portion to generate a fourth filtered signal. A first output signal is determined based on a sum of the first filtered signal and the third filtered signal. A second output signal is determined based on a sum of the second filtered signal and the fourth filtered signal.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: October 18, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
  • Patent number: 11477510
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: October 18, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Anastasia Andreyevna Tajik, Jean-Marc Jot
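
Illustrative code sketches

The sketches below are informal Python illustrations of several techniques described in the abstracts above. They are not taken from the patents or from any product code; function names, parameters, and signal-processing choices are assumptions made for readability.

Publication 20230245642 and patent 11651762 describe a reverberation chain in which a reverb initial gain (RIG) and a RIP correction factor are applied ahead of the reverberator so that the signal leaving a chosen component is normalized to a predetermined value such as unity. A minimal sketch, assuming the correction factor is the inverse square root of the reverberator's modeled initial power and that reverberate stands in for any reverberator:

    import numpy as np

    def rip_correction_factor(reverberator_initial_power):
        """Gain that normalizes the reverberator's initial power to unity (1.0).

        reverberator_initial_power is assumed to be a measured or modeled power of
        the reverberator's onset response; the name is illustrative, not from the claims.
        """
        return 1.0 / np.sqrt(reverberator_initial_power)

    def process_block(x, reverb_initial_gain, reverberator_initial_power,
                      direct_gain, reverberate):
        """One reading of the RIG -> RIP corrector -> reverberator -> combiner chain."""
        # Reverberation path: reverb initial gain, then the RIP correction factor,
        # then the reverberator itself.
        wet = reverberate(x * reverb_initial_gain
                          * rip_correction_factor(reverberator_initial_power))
        # Direct path; the combiner sums the two paths.
        return x * direct_gain + wet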
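Patent 11678117 stores the magnitude response of a prototype filter as gain values at a set of frequencies, retrieves the stored response, computes gains at a plurality of control frequencies, and applies them to the audio signal. A sketch under the assumption that the interpolation is done in dB over log frequency and that the gains are applied as a zero-phase FFT-domain filter; the patent does not prescribe either choice:

    import numpy as np

    # Stored magnitude response of a hypothetical prototype filter:
    # linear gains sampled at a handful of frequencies in Hz.
    PROTO_FREQS = np.array([125.0, 500.0, 2000.0, 8000.0])
    PROTO_GAINS = np.array([1.0, 0.8, 0.5, 0.25])

    def gains_at(control_freqs):
        """Interpolate the stored prototype response (dB over log frequency)
        to obtain gains at arbitrary control frequencies."""
        db = 20.0 * np.log10(PROTO_GAINS)
        interp_db = np.interp(np.log10(control_freqs), np.log10(PROTO_FREQS), db)
        return 10.0 ** (interp_db / 20.0)

    def apply_gains(signal, sample_rate, control_freqs, control_gains):
        """Apply the computed gains as a zero-phase FFT-domain filter."""
        spectrum = np.fft.rfft(signal)
        bin_freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        # Interpolate the control gains onto the FFT bins, holding the edge values.
        bin_gains = np.interp(bin_freqs, control_freqs, control_gains,
                              left=control_gains[0], right=control_gains[-1])
        return np.fft.irfft(spectrum * bin_gains, n=len(signal))

A real-time renderer might instead realize these gains with a bank of low-order filter sections; the interpolation from the stored prototype response is the step the sketch aims to show.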
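Patent 11570570 and publication 20230121353 route each sound source onto two buses: a first bus carrying the source's direct contribution, determined from its location, and a second bus carrying its reverberation contribution, determined from its location and from acoustic properties of the virtual environment. A sketch assuming a simple 1/distance direct gain and a shared mono reverberator, both illustrative choices rather than the claimed method:

    import numpy as np

    def render(sources, listener_pos, room_reverb, reverb_send=0.3):
        """Mix several sources onto a direct bus and a shared reverberation bus.

        sources is a list of (signal, position) pairs; room_reverb is any callable
        that reverberates a mono bus. The gain laws are illustrative choices only.
        """
        n = max(len(sig) for sig, _ in sources)
        direct_bus = np.zeros(n)
        reverb_bus = np.zeros(n)
        for sig, pos in sources:
            dist = max(np.linalg.norm(np.asarray(pos) - np.asarray(listener_pos)), 1e-3)
            direct_bus[:len(sig)] += sig / dist         # direct level falls off with distance
            reverb_bus[:len(sig)] += sig * reverb_send  # send kept roughly distance-independent
        # The reverberator runs once on the shared bus, not once per source.
        return direct_bus + room_reverb(reverb_bus)

Routing every source's reverberation contribution through one bus is what lets a single reverberator serve the whole scene, which appears to be the point of the two-bus arrangement.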
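Patent 11632646 and publication 20230217205 synthesize a sound when a virtual object collides with another object, provided an audio model corresponding to that object is stored in memory. A sketch assuming a modal model of damped sinusoids keyed by a hypothetical object identifier:

    import numpy as np

    # Hypothetical in-memory store: object identifier -> modal model
    # (mode frequencies in Hz, decay rates in 1/s, relative amplitudes).
    AUDIO_MODELS = {
        "virtual_mug": {"freqs": [900.0, 1800.0, 2600.0],
                        "decays": [6.0, 9.0, 14.0],
                        "amps": [1.0, 0.5, 0.25]},
    }

    def synthesize_collision(object_id, impact_speed, sample_rate=48000, duration=0.5):
        """Return a collision sound for object_id, or None if no model is stored for it."""
        model = AUDIO_MODELS.get(object_id)
        if model is None:
            return None  # no stored audio model corresponds to the object
        t = np.arange(int(duration * sample_rate)) / sample_rate
        out = np.zeros_like(t)
        for f, d, a in zip(model["freqs"], model["decays"], model["amps"]):
            # One damped sinusoid per mode, scaled by how hard the objects collided.
            out += impact_speed * a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
        return out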
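Patent 11546716 and publication 20230094733 pick, for each ear, the virtual-speaker position on a sphere concentric with the listener's head that is collinear with the source location and that ear, then apply the HRTF associated with that position. A sketch of the geometric step only, assuming the ear lies inside the sphere; the HRTF lookup and filtering are left abstract:

    import numpy as np

    def virtual_speaker_position(ear_pos, source_pos, head_center, radius):
        """Point on the virtual-speaker sphere collinear with the ear and the source.

        Solves the ray/sphere intersection for the ray that starts at the ear and
        passes through the source; the ear is assumed to lie inside the sphere.
        """
        ear = np.asarray(ear_pos, dtype=float)
        direction = np.asarray(source_pos, dtype=float) - ear
        direction /= np.linalg.norm(direction)
        offset = ear - np.asarray(head_center, dtype=float)
        # |ear + t*direction - center|^2 = radius^2  =>  t^2 + b*t + c = 0
        b = 2.0 * np.dot(direction, offset)
        c = np.dot(offset, offset) - radius * radius
        t = (-b + np.sqrt(b * b - 4.0 * c)) / 2.0  # the intersection in front of the ear
        return ear + t * direction

The HRTF corresponding to that position would then be applied to the signal routed to the respective ear, as the abstracts describe.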
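Patent 11540072 and publication 20230077524 estimate a reverberation time from the envelope of a microphone signal and flag a change in the environment when the estimate departs from a previous one. A sketch assuming a short-time RMS envelope and a straight-line fit to the decay in dB, extrapolated to -60 dB:

    import numpy as np

    def envelope_db(x, sample_rate, frame=0.01):
        """Short-time RMS envelope in dB; x is assumed to cover a decay after an excitation."""
        hop = max(int(frame * sample_rate), 1)
        rms = np.array([np.sqrt(np.mean(x[i:i + hop] ** 2)) + 1e-12
                        for i in range(0, len(x) - hop, hop)])
        return 20.0 * np.log10(rms), hop / sample_rate

    def estimate_rt60(decay_segment, sample_rate):
        """Fit a line to the decaying envelope (dB versus time) and extrapolate to -60 dB."""
        env, dt = envelope_db(np.asarray(decay_segment, dtype=float), sample_rate)
        t = np.arange(len(env)) * dt
        slope, _ = np.polyfit(t, env, 1)  # dB per second, negative for a decay
        return -60.0 / slope if slope < 0 else float("inf")

    def environment_changed(rt_new, rt_old, rel_threshold=0.25):
        """Flag an environment change when the reverberation time shifts noticeably."""
        return abs(rt_new - rt_old) > rel_threshold * rt_old

When environment_changed returns True, the renderer would update the second reverberation time used for the presented audio, per the abstract.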
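Patent 11477592 and publication 20220417686 split the input into two portions and apply four filters arranged as a 2-by-2 matrix: two filters per portion, with each of the two output signals formed by summing one filtered signal from each portion. A sketch assuming IIR filters supplied as (b, a) coefficient pairs:

    from scipy.signal import lfilter

    def render_two_portions(portion_a, portion_b, filters):
        """2x2 filter matrix: two input portions, four filters, two output channels.

        filters maps "f1".."f4" to (b, a) IIR coefficient pairs; which filters and
        portions correspond to which claimed elements is an assumption of this sketch.
        """
        f1 = lfilter(*filters["f1"], portion_a)  # first portion  -> first output
        f2 = lfilter(*filters["f2"], portion_a)  # first portion  -> second output
        f3 = lfilter(*filters["f3"], portion_b)  # second portion -> first output
        f4 = lfilter(*filters["f4"], portion_b)  # second portion -> second output
        return f1 + f3, f2 + f4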
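Patent 11477510 and publication 20230007332 derive a transfer function from the virtual object that a real sound intersects and from the listener's coordinate, and apply it to the real signal. A sketch assuming the transfer function reduces to a per-material gain plus a one-pole low-pass with a mild distance-dependent attenuation; all of these are illustrative simplifications:

    from scipy.signal import lfilter

    # Hypothetical acoustic properties for virtual materials placed in the scene.
    OCCLUDER_PROPERTIES = {
        "virtual_curtain": {"gain": 0.7, "lowpass_alpha": 0.3},
        "virtual_wall": {"gain": 0.2, "lowpass_alpha": 0.05},
    }

    def occlude(real_signal, occluder_id, distance_to_listener):
        """Attenuate and low-pass a real signal according to the intersected virtual object."""
        props = OCCLUDER_PROPERTIES[occluder_id]
        # Mild extra attenuation with listener distance; purely an illustrative choice.
        gain = props["gain"] / (1.0 + 0.1 * distance_to_listener)
        alpha = props["lowpass_alpha"]
        # One-pole low-pass: y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
        return gain * lfilter([alpha], [1.0, -(1.0 - alpha)], real_signal)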