Patents by Inventor Jean-Marc Jot

Jean-Marc Jot has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220240044
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Application
    Filed: April 12, 2022
    Publication date: July 28, 2022
    Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
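The abstract above describes a lookup-then-synthesize flow: find an audio model for the colliding object, and synthesize a collision sound only if one exists. A minimal Python sketch of that flow, assuming a toy model representation (one decaying sinusoid per object; `audio_models`, the object names, and all parameter values are hypothetical, not from the patent):

```python
import math

# Hypothetical per-object audio models: each reduced to one resonant
# frequency and an exponential decay rate (real models would be richer).
audio_models = {
    "virtual_mug": {"freq_hz": 880.0, "decay_per_s": 6.0},
}

def synthesize_collision(object_id, impact_gain, sample_rate=8000, duration_s=0.05):
    """Look up the object's audio model; if present, synthesize a decaying
    sinusoid scaled by the collision's impact gain, else return None."""
    model = audio_models.get(object_id)
    if model is None:
        return None  # no stored model for this object: skip synthesis
    num_samples = int(sample_rate * duration_s)
    return [
        impact_gain
        * math.exp(-model["decay_per_s"] * n / sample_rate)
        * math.sin(2 * math.pi * model["freq_hz"] * n / sample_rate)
        for n in range(num_samples)
    ]

signal = synthesize_collision("virtual_mug", impact_gain=0.5)
no_model = synthesize_collision("unknown_object", impact_gain=0.5)
```

In the patent's terms, `signal` would then be presented via the speaker of the head-wearable device, while the `None` branch corresponds to the case where no stored model matches the first object.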
  • Publication number: 20220230658
    Abstract: In some embodiments, a first audio signal is received via a first microphone, and a first probability of voice activity is determined based on the first audio signal. A second audio signal is received via a second microphone, and a second probability of voice activity is determined based on the first and second audio signals. Whether a first threshold of voice activity is met is determined based on the first and second probabilities of voice activity. In accordance with a determination that a first threshold of voice activity is met, it is determined that a voice onset has occurred, and an alert is transmitted to a processor based on the determination that the voice onset has occurred. In accordance with a determination that a first threshold of voice activity is not met, it is not determined that a voice onset has occurred.
    Type: Application
    Filed: April 6, 2022
    Publication date: July 21, 2022
    Inventors: Jung-Suk Lee, Jean-Marc Jot
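The two-branch determination in this abstract can be sketched as follows. The averaging combination rule and the 0.6 threshold are illustrative assumptions; the abstract does not specify how the two probabilities are combined:

```python
def voice_onset_detected(p_first, p_joint, threshold=0.6):
    """Combine the first microphone's voice-activity probability with the
    probability computed from both microphones, then test the threshold.
    Returns True (onset occurred: alert the processor) or False (no onset)."""
    combined = 0.5 * (p_first + p_joint)  # illustrative combination rule
    return combined >= threshold

# An alert would be transmitted only in the first case:
onset = voice_onset_detected(0.9, 0.8)     # combined 0.85 meets the threshold
no_onset = voice_onset_detected(0.3, 0.2)  # combined 0.25 does not
```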
  • Patent number: 11337023
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: May 17, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
  • Patent number: 11328740
    Abstract: In some embodiments, a first audio signal is received via a first microphone, and a first probability of voice activity is determined based on the first audio signal. A second audio signal is received via a second microphone, and a second probability of voice activity is determined based on the first and second audio signals. Whether a first threshold of voice activity is met is determined based on the first and second probabilities of voice activity. In accordance with a determination that a first threshold of voice activity is met, it is determined that a voice onset has occurred, and an alert is transmitted to a processor based on the determination that the voice onset has occurred. In accordance with a determination that a first threshold of voice activity is not met, it is not determined that a voice onset has occurred.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: May 10, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Jung-Suk Lee, Jean-Marc Jot
  • Publication number: 20220130370
    Abstract: Systems and methods for providing accurate and independent control of reverberation properties are disclosed. In some embodiments, a system may include a reverberation processing system, a direct processing system, and a combiner. The reverberation processing system can include a reverb initial power (RIP) control system and a reverberator. The RIP control system can include a reverb initial gain (RIG) and a RIP corrector. The RIG can be configured to apply a RIG value to the input signal, and the RIP corrector can be configured to apply a RIP correction factor to the signal from the RIG. The reverberator can be configured to apply reverberation effects to the signal from the RIP control system. In some embodiments, one or more values and/or correction factors can be calculated and applied such that the signal output from a component in the reverberation processing system is normalized to a predetermined value (e.g., unity (1.0)).
    Type: Application
    Filed: January 4, 2022
    Publication date: April 28, 2022
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
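A worked sketch of the normalization goal stated at the end of the abstract: choose a RIP correction factor so the reverberator's initial output power lands at unity. The specific values here are illustrative, and power is assumed to scale with the square of the applied gain:

```python
def rip_correction_factor(reverb_initial_power, target_power=1.0):
    """Gain to apply ahead of the reverberator so that its initial output
    power is normalized to target_power (power scales as gain squared)."""
    return (target_power / reverb_initial_power) ** 0.5

# A toy reverberator whose uncorrected initial echo power is 4.0:
reverb_initial_power = 4.0
correction = rip_correction_factor(reverb_initial_power)
normalized_power = (correction ** 2) * reverb_initial_power
```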
  • Patent number: 11304017
    Abstract: Examples of the disclosure describe systems and methods for estimating acoustic properties of an environment. In an example method, a first audio signal is received via a microphone of a wearable head device. An envelope of the first audio signal is determined, and a first reverberation time is estimated based on the envelope of the first audio signal. A difference between the first reverberation time and a second reverberation time is determined. A change in the environment is determined based on the difference between the first reverberation time and the second reverberation time. A second audio signal is presented via a speaker of a wearable head device, wherein the second audio signal is based on the second reverberation time.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: April 12, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Mathieu Parvaix, Jean-Marc Jot, Colby Nelson Leider
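One way to read the envelope-to-reverberation-time step in this abstract is as a log-decay line fit. The sketch below is an assumption about the method, not the patented estimator: it fits a least-squares line to the envelope's decay in decibels and reports the time for a 60 dB drop, then flags an environment change when the estimate departs from a stored value:

```python
import math

def estimate_rt60(envelope, sample_rate):
    """Estimate reverberation time (RT60) from a decaying amplitude
    envelope: fit a least-squares line to the log-magnitude decay and
    report the time needed for a 60 dB drop along that slope."""
    db = [20.0 * math.log10(max(e, 1e-12)) for e in envelope]
    n = len(db)
    t = [i / sample_rate for i in range(n)]
    mean_t = sum(t) / n
    mean_db = sum(db) / n
    slope = (sum((ti - mean_t) * (di - mean_db) for ti, di in zip(t, db))
             / sum((ti - mean_t) ** 2 for ti in t))
    return -60.0 / slope

# Synthetic envelope decaying at exactly 60 dB per second:
sr = 1000
envelope = [10.0 ** (-3.0 * i / sr) for i in range(sr)]
rt60 = estimate_rt60(envelope, sr)
room_changed = abs(rt60 - 0.3) > 0.2  # compare against a stored estimate
```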
  • Patent number: 11304020
    Abstract: Systems and methods can provide an elevated, virtual loudspeaker source in a three-dimensional soundfield using loudspeakers in a horizontal plane. In an example, a processor circuit can receive at least one height audio signal that includes information intended for reproduction using a loudspeaker that is elevated relative to a listener, and optionally offset from the listener's facing direction by a specified azimuth angle. A first virtual height filter can be selected for use based on the specified azimuth angle. A virtualized audio signal can be generated by applying the first virtual height filter to the at least one height audio signal. When the virtualized audio signal is reproduced using one or more loudspeakers in the horizontal plane, the virtualized audio signal can be perceived by the listener as originating from an elevated loudspeaker source that corresponds to the azimuth angle.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: April 12, 2022
    Assignee: DTS, Inc.
    Inventors: Jean-Marc Jot, Daekyoung Noh, Ryan James Cassidy, Themis George Katsianos, Oveal Walker
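The filter-selection step described in this abstract (pick a virtual height filter based on the specified azimuth angle) can be sketched as a nearest-azimuth lookup. The filter bank, its azimuth keys, and the stand-in string values are all hypothetical:

```python
# Hypothetical filter bank: one virtual height filter per design azimuth.
# Real filters would encode elevation cues; strings stand in for them here.
height_filters = {0: "front_filter", 45: "diagonal_filter", 90: "side_filter"}

def select_height_filter(azimuth_deg):
    """Select the virtual height filter whose design azimuth is closest
    to the specified azimuth angle of the elevated virtual source."""
    nearest = min(height_filters, key=lambda a: abs(a - azimuth_deg))
    return height_filters[nearest]

chosen = select_height_filter(30.0)
```

Applying the chosen filter to the height audio signal would then yield the virtualized signal that horizontal-plane loudspeakers reproduce.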
  • Patent number: 11252528
    Abstract: A system and method for providing low interaural coherence at low frequencies is disclosed. In some embodiments, the system may include a reverberator and a low-frequency interaural coherence control system. The reverberator may include two sets of comb filters, one for the left ear output signal and one for the right ear output signal. The low-frequency interaural coherence control system can include a plurality of sections, each section can be configured to control a certain frequency range of the signals that propagate through the given section. The sections may include a left high-frequency section for the left ear output signal and a right high-frequency section for the right ear output signal. The sections may also include a shared low-frequency section that can output signals to be combined by combiners of the left and right high-frequency sections.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: February 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot
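A sketch of the shared-low/split-high topology this abstract describes: distinct comb delays decorrelate the per-ear high-frequency sections, while a single shared section feeds both ears, raising coherence where it is shared. The delays and feedback values are illustrative, and this toy separates the sections by delay choice rather than by an actual crossover:

```python
def comb_filter(signal, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    out = []
    for n, x in enumerate(signal):
        out.append(x + (feedback * out[n - delay] if n >= delay else 0.0))
    return out

# Distinct comb delays decorrelate the per-ear high-frequency sections;
# one shared section feeds both ears so its contribution is identical.
impulse = [1.0] + [0.0] * 31
left_high = comb_filter(impulse, delay=7, feedback=0.5)
right_high = comb_filter(impulse, delay=11, feedback=0.5)
shared_low = comb_filter(impulse, delay=16, feedback=0.5)
left_out = [h + s for h, s in zip(left_high, shared_low)]
right_out = [h + s for h, s in zip(right_high, shared_low)]
```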
  • Patent number: 11250834
    Abstract: Systems and methods for providing accurate and independent control of reverberation properties are disclosed. In some embodiments, a system may include a reverberation processing system, a direct processing system, and a combiner. The reverberation processing system can include a reverb initial power (RIP) control system and a reverberator. The RIP control system can include a reverb initial gain (RIG) and a RIP corrector. The RIG can be configured to apply a RIG value to the input signal, and the RIP corrector can be configured to apply a RIP correction factor to the signal from the RIG. The reverberator can be configured to apply reverberation effects to the signal from the RIP control system. In some embodiments, one or more values and/or correction factors can be calculated and applied such that the signal output from a component in the reverberation processing system is normalized to a predetermined value (e.g., unity (1.0)).
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: February 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
  • Publication number: 20220038840
    Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius.
    Type: Application
    Filed: August 12, 2021
    Publication date: February 3, 2022
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Mark Brandon Hertensteiner, Justin Dan Mathew, Anastasia Andreyevna Tajik, Nicholas John LaMartina
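The collinearity constraint in this abstract (a virtual speaker position collinear with the source location and the ear, lying on a sphere concentric with the head) is a ray-sphere intersection. A geometric sketch, assuming the head center is the origin; the quadratic-root formulation is one standard way to solve it, not necessarily the patent's:

```python
import math

def virtual_speaker_position(ear, source, radius):
    """Intersect the ray from the ear through the source with a sphere of
    the given radius centered at the head origin; the intersection is the
    virtual speaker position collinear with the ear and the source."""
    direction = [s - e for s, e in zip(source, ear)]
    norm = math.sqrt(sum(c * c for c in direction))
    direction = [c / norm for c in direction]
    b = sum(e * c for e, c in zip(ear, direction))
    c_term = sum(e * e for e in ear) - radius * radius
    t = -b + math.sqrt(b * b - c_term)  # positive root of |ear + t*dir| = radius
    return [e + t * c for e, c in zip(ear, direction)]

left_ear = [-0.09, 0.0, 0.0]  # roughly 9 cm from the head center
speaker = virtual_speaker_position(left_ear, [1.0, 0.0, 0.0], radius=1.0)
```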
  • Patent number: 11190877
    Abstract: Embodiments of systems and methods are described for reducing undesired leakage energy produced by a non-front-facing speaker in a multi-speaker system. For example, the multi-speaker system can include an array of forward-facing speakers, one or more upward-facing speakers, and/or one or more side-facing speakers. Filters coupled to any two of the speakers in the multi-speaker system can generate audio signals output by the coupled speakers to reduce, attenuate, or cancel a portion of an audio signal output by one or more non-front-facing speakers that acoustically propagates along a direct path from the respective non-front-facing speaker to a listening position in a listening area in front of the multi-speaker system.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: November 30, 2021
    Assignee: DTS, Inc.
    Inventors: Suketu Kamdar, Zesen Zhuang, Martin Walsh, Edward Stein, Michael M. Goodwin, Jean-Marc Jot
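The cancellation idea in this abstract can be illustrated with a one-dimensional toy: model each speaker-to-listener direct path as a gain plus a delay, and drive the front speaker with an inverted, rescaled copy of the non-front-facing speaker's signal so the two arrivals cancel. The gains and delays are hypothetical, and the toy assumes the two path delays are matched:

```python
def direct_path(signal, gain, delay):
    """Acoustic propagation from a speaker to the listening position,
    modeled as a broadband gain plus an integer-sample delay."""
    return [0.0] * delay + [gain * s for s in signal]

def residual_at_listener(height_signal, height_gain, height_delay,
                         front_gain, front_delay):
    """Sum, at the listening position, of an upward-firing speaker's
    direct-path leakage and a front speaker's cancelling output. The
    cancellation feed inverts the signal and rescales it so both
    arrivals match in level (delays assumed matched in this toy)."""
    leakage = direct_path(height_signal, height_gain, height_delay)
    cancel_feed = [-(height_gain / front_gain) * s for s in height_signal]
    cancelling = direct_path(cancel_feed, front_gain, front_delay)
    length = max(len(leakage), len(cancelling))
    leakage += [0.0] * (length - len(leakage))
    cancelling += [0.0] * (length - len(cancelling))
    return [a + b for a, b in zip(leakage, cancelling)]

residual = residual_at_listener([1.0, 0.5], height_gain=0.3, height_delay=2,
                                front_gain=0.9, front_delay=2)
```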
  • Publication number: 20210306751
    Abstract: Disclosed herein are systems and methods for processing speech signals in mixed reality applications. A method may include receiving an audio signal; determining, via first processors, whether the audio signal comprises a voice onset event; in accordance with a determination that the audio signal comprises the voice onset event: waking a second one or more processors; determining, via the second processors, that the audio signal comprises a predetermined trigger signal; in accordance with a determination that the audio signal comprises the predetermined trigger signal: waking third processors; performing, via the third processors, automatic speech recognition based on the audio signal; and in accordance with a determination that the audio signal does not comprise the predetermined trigger signal: forgoing waking the third processors; and in accordance with a determination that the audio signal does not comprise the voice onset event: forgoing waking the second processors.
    Type: Application
    Filed: March 26, 2021
    Publication date: September 30, 2021
    Inventors: David Thomas Roach, Jean-Marc Jot, Jung-Suk Lee
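The control flow in this abstract is a three-stage wake cascade: each stage wakes the next, costlier set of processors only when its own determination passes. A sketch with toy detectors standing in for the real onset, trigger, and ASR stages (all names are illustrative):

```python
def process_audio(frame, detect_onset, detect_trigger, run_asr):
    """Three-stage cascade: each stage wakes the next, costlier processors
    only when its own determination passes; otherwise later stages sleep."""
    woken = []
    if not detect_onset(frame):
        return woken, None          # forgo waking the second processors
    woken.append("second")
    if not detect_trigger(frame):
        return woken, None          # forgo waking the third processors
    woken.append("third")
    return woken, run_asr(frame)    # full automatic speech recognition

# Toy detectors standing in for the low-power DSP and wake-word stages:
onset = lambda f: "voice" in f
trigger = lambda f: "hey" in f
asr = lambda f: f.upper()
stages, result = process_audio("voice hey device", onset, trigger, asr)
```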
  • Patent number: 11122383
    Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: September 14, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Mark Brandon Hertensteiner, Justin Dan Mathew, Anastasia Andreyevna Tajik, Nicholas John LaMartina
  • Publication number: 20210243546
    Abstract: Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment.
    Type: Application
    Filed: February 12, 2021
    Publication date: August 5, 2021
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
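The bus structure in this abstract (a per-source direct send accumulated on a first bus, plus a reverberation send for the same sources) can be sketched with scalar signals. The distance-based attenuation law and the fixed reverb send gain are illustrative assumptions:

```python
def mix_sources(input_signal, sources, reverb_send_gain):
    """Route one input through each virtual sound source: a distance-
    attenuated direct send (first intermediate signal, direct bus) and a
    reverb send (second intermediate signal, shared reverb bus)."""
    direct_bus = 0.0
    reverb_bus = 0.0
    for source in sources:
        distance = max(source["distance"], 1.0)  # clamp to avoid blow-up
        direct_bus += input_signal / distance
        reverb_bus += input_signal * reverb_send_gain
    return direct_bus, reverb_bus

direct, reverb = mix_sources(1.0, [{"distance": 2.0}, {"distance": 4.0}], 0.3)
```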
  • Publication number: 20210195360
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Application
    Filed: December 18, 2020
    Publication date: June 24, 2021
    Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
  • Publication number: 20210185471
    Abstract: Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
    Type: Application
    Filed: March 2, 2021
    Publication date: June 17, 2021
    Inventors: Jean-Marc Jot, Michael Minnick, Dmitry Pastouchenko, Michael Aaron Simon, John Emmitt Scott, III, Richard St. Clair Bailey, Shivakumar Balasubramanyam, Harsharaj Agadi
  • Publication number: 20210176588
    Abstract: Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors configured to execute a method. A method for execution by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
    Type: Application
    Filed: December 4, 2020
    Publication date: June 10, 2021
    Applicants: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Mark Brandon Hertensteiner, Samuel Charles Dicker, Blaine Ivin Wood, Michael Z. Land, Jean-Marc Jot
  • Publication number: 20210160616
    Abstract: A method of processing an audio signal is disclosed. According to embodiments of the method, magnitude response information of a prototype filter is determined. The magnitude response information includes a plurality of gain values, at least one of which includes a first gain corresponding to a first frequency. The magnitude response information of the prototype filter is stored. The magnitude response information of the prototype filter at the first frequency is retrieved. Gains are computed for a plurality of control frequencies based on the retrieved magnitude response information of the prototype filter at the first frequency, and the computed gains are applied to the audio signal.
    Type: Application
    Filed: December 3, 2020
    Publication date: May 27, 2021
    Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker
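The retrieval-and-computation step in this abstract (store a prototype filter's magnitude response as gain values at reference frequencies, then compute gains at arbitrary control frequencies from them) suggests interpolation. This sketch uses linear interpolation, which is an assumption; the stored frequency/gain pairs are illustrative:

```python
def gain_at(prototype, freq):
    """Interpolate the prototype filter's stored magnitude response
    (a dict of frequency -> gain) at an arbitrary control frequency,
    clamping outside the stored range."""
    pts = sorted(prototype.items())
    if freq <= pts[0][0]:
        return pts[0][1]
    if freq >= pts[-1][0]:
        return pts[-1][1]
    for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
        if f0 <= freq <= f1:
            return g0 + (g1 - g0) * (freq - f0) / (f1 - f0)

# Stored magnitude response of a hypothetical low-pass prototype:
prototype = {100.0: 1.0, 1000.0: 0.5, 10000.0: 0.1}
control_gains = [gain_at(prototype, f) for f in (100.0, 550.0, 1000.0)]
```

The computed `control_gains` would then be applied to the audio signal, per the abstract's final step.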
  • Publication number: 20210160647
    Abstract: A method of presenting an audio signal to a user of a mixed reality environment is disclosed. According to examples of the method, an audio event associated with the mixed reality environment is detected. The audio event is associated with a first audio signal. A location of the user with respect to the mixed reality environment is determined. A first acoustic region associated with the location of the user is identified. A first acoustic parameter associated with the first acoustic region is determined. A transfer function is determined using the first acoustic parameter. The transfer function is applied to the first audio signal to produce a second audio signal, which is then presented to the user.
    Type: Application
    Filed: November 4, 2020
    Publication date: May 27, 2021
    Inventors: Brian Lloyd Schmidt, Jehangir Tajik, Jean-Marc Jot
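The region-lookup pipeline in this abstract (locate the user, identify an acoustic region, derive a transfer function from its parameter, apply it) can be sketched with a trivial region table. The region names, the position rule, and the gain-only "transfer function" are all hypothetical stand-ins:

```python
# Hypothetical acoustic regions keyed by where the listener is standing.
REGIONS = {"cave": {"reverb_gain": 0.9}, "closet": {"reverb_gain": 0.1}}

def region_for(position):
    """Identify the acoustic region containing a listener position
    (a toy spatial lookup: negative x is the cave, otherwise the closet)."""
    return "cave" if position[0] < 0 else "closet"

def apply_transfer(signal, region_name):
    """Derive a transfer function from the region's acoustic parameter and
    apply it (reduced here to a broadband gain) to the first audio signal."""
    gain = REGIONS[region_name]["reverb_gain"]
    return [s * gain for s in signal]

second_signal = apply_transfer([1.0, 0.5], region_for((-2.0, 0.0)))
```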
  • Publication number: 20210160650
    Abstract: A system and method for providing low interaural coherence at low frequencies is disclosed. In some embodiments, the system may include a reverberator and a low-frequency interaural coherence control system. The reverberator may include two sets of comb filters, one for the left ear output signal and one for the right ear output signal. The low-frequency interaural coherence control system can include a plurality of sections, each section can be configured to control a certain frequency range of the signals that propagate through the given section. The sections may include a left high-frequency section for the left ear output signal and a right high-frequency section for the right ear output signal. The sections may also include a shared low-frequency section that can output signals to be combined by combiners of the left and right high-frequency sections.
    Type: Application
    Filed: October 6, 2020
    Publication date: May 27, 2021
    Inventors: Remi Samuel Audfray, Jean-Marc Jot