Patents by Inventor Justin Dan MATHEW
Justin Dan MATHEW has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240414492
Abstract: This disclosure relates in general to augmented reality (AR), mixed reality (MR), or extended reality (XR) environmental mapping. Specifically, this disclosure relates to AR, MR, or XR audio mapping in an AR, MR, or XR environment. In some embodiments, the disclosed systems and methods allow the environment to be mapped based on a recording. In some embodiments, the audio mapping information is associated with voxels located in the environment.
Type: Application
Filed: October 18, 2022
Publication date: December 12, 2024
Inventors: Mark Brandon HERTENSTEINER, Justin Dan MATHEW, Remi Samuel AUDFRAY, Jean-Marc JOT, Benjamin Thomas VONDERSAAR, Michael Z. LAND
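The core idea in this filing is a spatial data structure: acoustic information derived from a recording is attached to the voxels of the environment in which it was captured. The sketch below is only an illustration, not the patented implementation; the 0.5 m voxel size, the choice of reverberation time (RT60) as the mapped quantity, and all function names are assumptions.

```python
# Hypothetical sketch of voxel-based audio mapping: associate a per-position
# acoustic measurement (here, a crude RT60 estimate from a recorded impulse
# response) with the voxel containing that position. Illustrative only.

import numpy as np

VOXEL_SIZE = 0.5  # metres per voxel edge (assumed)

def estimate_rt60(impulse_response: np.ndarray, sample_rate: int) -> float:
    """Rough RT60 estimate from the energy decay curve (Schroeder integration)."""
    energy = impulse_response ** 2
    edc = np.cumsum(energy[::-1])[::-1]           # remaining energy over time
    edc_db = 10 * np.log10(edc / edc[0] + 1e-12)  # normalise to 0 dB at t = 0
    # Time to fall from -5 dB to -35 dB, extrapolated to 60 dB of decay.
    t5 = np.argmax(edc_db <= -5) / sample_rate
    t35 = np.argmax(edc_db <= -35) / sample_rate
    return 2.0 * (t35 - t5)

def voxel_index(position_m: np.ndarray) -> tuple:
    """Quantise a 3-D position (metres) to the index of its containing voxel."""
    return tuple(np.floor(position_m / VOXEL_SIZE).astype(int))

class AudioMap:
    """Associates acoustic measurements with voxels of the environment."""

    def __init__(self):
        self.voxels = {}  # voxel index -> list of RT60 estimates

    def add_recording(self, position_m, impulse_response, sample_rate):
        idx = voxel_index(np.asarray(position_m, dtype=float))
        self.voxels.setdefault(idx, []).append(
            estimate_rt60(impulse_response, sample_rate))

    def reverb_time_at(self, position_m):
        samples = self.voxels.get(voxel_index(np.asarray(position_m, dtype=float)))
        return float(np.mean(samples)) if samples else None
```

A renderer could then query `reverb_time_at(listener_position)` to choose reverberation parameters for virtual sounds placed in that part of the environment.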
-
Publication number: 20240412748
Abstract: To facilitate communicating payload data to a destination, a computing system varies audio-rendering-directive metadata over time in a manner that represents the payload data and outputs the varied audio-rendering-directive metadata over time for communication along with an audio stream to the destination to facilitate rendering of the audio stream at the destination in accordance with the varied audio-rendering-directive metadata over time. The rendering of the audio stream at the destination conveys the payload data by being in accordance with the varied audio-rendering-directive metadata over time that represents the payload data. Thus, an audio meter operating at the destination could detect associated variation in the rendered audio stream and map that detected variation in the rendered audio stream to the payload data, thereby receiving the payload data.
Type: Application
Filed: June 7, 2023
Publication date: December 12, 2024
Inventors: Alexander Topchy, Justin Dan Mathew, Jeremey M. Davis, Vladimir Kuznetsov
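Read as a signal chain, the abstract describes encoding payload bits into time-varying rendering directives, rendering the stream in accordance with those directives, and having a meter at the destination map the resulting variation in the rendered audio back to the bits. The following is a deliberately simplified illustration under assumed parameters (a per-frame gain directive, a 0.1 s frame, and a meter that is given the unmodified reference signal, which a real meter would not have); it is not the metadata format or detection scheme of the filing.

```python
# Hypothetical round trip: payload bits -> time-varying gain directives ->
# rendered audio -> bits recovered by comparing per-frame energy.

import numpy as np

FRAME = 4800                          # samples per frame (0.1 s at 48 kHz, assumed)
GAIN_FOR_BIT = {0: 1.00, 1: 1.02}     # a +0.17 dB step encodes a '1' (assumed)

def metadata_for_payload(bits):
    """One gain directive per frame, representing the payload bits."""
    return [GAIN_FOR_BIT[b] for b in bits]

def render(audio, gain_directives):
    """Render the stream frame by frame in accordance with the directives."""
    out = audio.astype(float).copy()
    for i, g in enumerate(gain_directives):
        out[i * FRAME:(i + 1) * FRAME] *= g
    return out

def meter_decode(reference, rendered, n_bits):
    """Compare per-frame energy against the reference to recover the bits."""
    bits = []
    for i in range(n_bits):
        ref = reference[i * FRAME:(i + 1) * FRAME]
        ren = rendered[i * FRAME:(i + 1) * FRAME]
        ratio = np.sqrt(np.mean(ren ** 2) / (np.mean(ref ** 2) + 1e-12))
        bits.append(1 if ratio > 1.01 else 0)
    return bits

# Encode a byte, render, then decode it at the "destination".
rng = np.random.default_rng(0)
audio = rng.standard_normal(FRAME * 8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
rendered = render(audio, metadata_for_payload(payload))
assert meter_decode(audio, rendered, 8) == payload
```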
-
Publication number: 20240414490
Abstract: To facilitate communicating payload data to a destination, a computing system varies audio-rendering-directive metadata over time in a manner that represents the payload data and outputs the varied audio-rendering-directive metadata over time for communication along with an audio stream to the destination to facilitate rendering of the audio stream at the destination in accordance with the varied audio-rendering-directive metadata over time. The rendering of the audio stream at the destination conveys the payload data by being in accordance with the varied audio-rendering-directive metadata over time that represents the payload data. Thus, an audio meter operating at the destination could detect associated variation in the rendered audio stream and map that detected variation in the rendered audio stream to the payload data, thereby receiving the payload data.
Type: Application
Filed: June 7, 2023
Publication date: December 12, 2024
Inventors: Alexander Topchy, Justin Dan Mathew, Jeremey M. Davis, Vladimir Kuznetsov
-
Publication number: 20240357311
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
Type: Application
Filed: July 1, 2024
Publication date: October 24, 2024
Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER, Mark Brandon HERTENSTEINER, Justin Dan MATHEW, Anastasia Andreyevna TAJIK, Nicholas John LAMARTINA
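The geometric step in this family is concrete enough to sketch: for each ear, the virtual speaker position is the point of the virtual speaker array that lies on the line through the source location and that ear, and the HRTF for that position is applied to the signal. The sketch below is a hypothetical illustration only, assuming the virtual speaker array lies on a sphere concentric with the user's head (as the related filings further down this list describe), nearest-neighbour selection from a set of measured HRTF positions, and HRTFs stored as FIR impulse responses; it is not Magic Leap's implementation.

```python
# Hypothetical per-ear virtual-speaker selection and HRTF filtering.

import numpy as np

def collinear_virtual_speaker(ear_pos, source_pos, head_center, radius):
    """Intersect the ray from the ear through the source with a sphere of the
    given radius centred on the head; returns the far intersection point."""
    d = source_pos - ear_pos
    d = d / np.linalg.norm(d)
    oc = ear_pos - head_center
    # Solve |oc + t*d|^2 = radius^2 for t (quadratic with a = 1).
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - radius ** 2
    t = (-b + np.sqrt(b * b - 4.0 * c)) / 2.0
    return ear_pos + t * d

def nearest_hrtf(virtual_speaker_pos, hrtf_positions, hrtf_filters):
    """Pick the measured HRTF whose position is closest to the virtual speaker."""
    dists = np.linalg.norm(hrtf_positions - virtual_speaker_pos, axis=1)
    return hrtf_filters[int(np.argmin(dists))]

def render_ear(audio, ear_pos, source_pos, head_center, radius,
               hrtf_positions, hrtf_filters):
    """Apply the selected ear-specific HRTF (an FIR filter here) to the signal."""
    vsp = collinear_virtual_speaker(ear_pos, source_pos, head_center, radius)
    hrir = nearest_hrtf(vsp, hrtf_positions, hrtf_filters)
    return np.convolve(audio, hrir)[:len(audio)]
```

Running `render_ear` once per ear with that ear's position yields the left and right output signals for the wearable device's speakers.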
-
Publication number: 20240284138
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Application
Filed: April 29, 2024
Publication date: August 22, 2024
Inventors: Colby Nelson LEIDER, Justin Dan MATHEW, Michael Z. LAND, Blaine Ivin WOOD, Jung-Suk LEE, Anastasia Andreyevna TAJIK, Jean-Marc JOT
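The control flow in this abstract (detect a collision involving a virtual object, look up an audio model for that object in memory, synthesise a signal from the collision and the model, present it through the headset speaker) can be sketched as follows. This is a hypothetical illustration in which the audio model is a small set of decaying modes and the collision contributes a scalar impulse; the actual model format and synthesis method are not specified in the listing.

```python
# Hypothetical collision-driven lookup-and-synthesise flow. Names, the modal
# model, and the scalar "impulse" are illustrative assumptions.

import numpy as np

SAMPLE_RATE = 48_000

def synthesize_impact(modes, impulse, duration_s=0.5):
    """Sum exponentially decaying sinusoids (one per mode), scaled by the impulse."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    out = np.zeros_like(t)
    for freq_hz, decay_per_s, amp in modes:
        out += amp * np.exp(-decay_per_s * t) * np.sin(2 * np.pi * freq_hz * t)
    return impulse * out

# "Memory storing one or more audio models": object id -> modal parameters.
audio_models = {
    "virtual_mug": [(520.0, 9.0, 1.0), (1300.0, 14.0, 0.4), (2900.0, 22.0, 0.2)],
}

def on_collision(first_object_id, impulse, speaker_play):
    """If an audio model exists for the (virtual) first object, synthesise and play."""
    model = audio_models.get(first_object_id)
    if model is None:
        return  # no model stored for this object; nothing to present
    speaker_play(synthesize_impact(model, impulse))

# Example: a virtual mug collides with another object, normalised impulse 0.8.
on_collision("virtual_mug", 0.8, speaker_play=lambda sig: None)  # stub playback
```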
-
Patent number: 12063497
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
Type: Grant
Filed: August 17, 2023
Date of Patent: August 13, 2024
Assignee: Magic Leap, Inc.
Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Mark Brandon Hertensteiner, Justin Dan Mathew, Anastasia Andreyevna Tajik, Nicholas John LaMartina
-
Patent number: 12003953
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Grant
Filed: March 10, 2023
Date of Patent: June 4, 2024
Assignee: Magic Leap, Inc.
Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
-
Publication number: 20230396947
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
Type: Application
Filed: August 17, 2023
Publication date: December 7, 2023
Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER, Mark Brandon HERTENSTEINER, Justin Dan MATHEW, Anastasia Andreyevna TAJIK, Nicholas John LaMARTINA
-
Patent number: 11778411
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
Type: Grant
Filed: December 2, 2022
Date of Patent: October 3, 2023
Assignee: Magic Leap, Inc.
Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Mark Brandon Hertensteiner, Justin Dan Mathew, Anastasia Andreyevna Tajik, Nicholas John LaMartina
-
Publication number: 20230217205
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Application
Filed: March 10, 2023
Publication date: July 6, 2023
Inventors: Colby Nelson LEIDER, Justin Dan MATHEW, Michael Z. LAND, Blaine Ivin WOOD, Jung-Suk LEE, Anastasia Andreyevna TAJIK, Jean-Marc JOT
-
Patent number: 11632646
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Grant
Filed: April 12, 2022
Date of Patent: April 18, 2023
Assignee: Magic Leap, Inc.
Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
-
Publication number: 20230094733
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
Type: Application
Filed: December 2, 2022
Publication date: March 30, 2023
Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER, Mark Brandon HERTENSTEINER, Justin Dan MATHEW, Anastasia Andreyevna TAJIK, Nicholas John LaMARTINA
-
Patent number: 11546716
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius.
Type: Grant
Filed: August 12, 2021
Date of Patent: January 3, 2023
Assignee: Magic Leap, Inc.
Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Mark Brandon Hertensteiner, Justin Dan Mathew, Anastasia Andreyevna Tajik, Nicholas John LaMartina
-
Publication number: 20220240044
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Application
Filed: April 12, 2022
Publication date: July 28, 2022
Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
-
Patent number: 11337023
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Grant
Filed: December 18, 2020
Date of Patent: May 17, 2022
Assignee: Magic Leap, Inc.
Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
-
Publication number: 20220038840
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius.
Type: Application
Filed: August 12, 2021
Publication date: February 3, 2022
Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER, Mark Brandon HERTENSTEINER, Justin Dan MATHEW, Anastasia Andreyevna TAJIK, Nicholas John LaMARTINA
-
Patent number: 11122383
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius.
Type: Grant
Filed: October 4, 2019
Date of Patent: September 14, 2021
Assignee: Magic Leap, Inc.
Inventors: Remi Samuel Audfray, Jean-Marc Jot, Samuel Charles Dicker, Mark Brandon Hertensteiner, Justin Dan Mathew, Anastasia Andreyevna Tajik, Nicholas John LaMartina
-
Publication number: 20210195360
Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
Type: Application
Filed: December 18, 2020
Publication date: June 24, 2021
Inventors: Colby Nelson LEIDER, Justin Dan MATHEW, Michael Z. LAND, Blaine Ivin WOOD, Jung-Suk LEE, Anastasia Andreyevna TAJIK, Jean-Marc JOT
-
Publication number: 20200112815
Abstract: Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. An acoustic axis corresponding to the audio signal is determined. For each of a respective left and right ear of the user, an angle between the acoustic axis and the respective ear is determined. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. The virtual speaker array includes a plurality of virtual speaker positions, each virtual speaker position of the plurality located on the surface of a sphere concentric with the user's head, the sphere having a first radius.
Type: Application
Filed: October 4, 2019
Publication date: April 9, 2020
Inventors: Remi Samuel AUDFRAY, Jean-Marc JOT, Samuel Charles DICKER, Mark Brandon HERTENSTEINER, Justin Dan MATHEW, Anastasia Andreyevna TAJIK, Nicholas John LaMARTINA