Patents by Inventor Michael Z. Land

Michael Z. Land has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11910183
    Abstract: Disclosed herein are systems and methods for efficiently rendering audio. A method may include receiving a request to present a first audio track, wherein the first audio track is based on a first audio model comprising a shared model component and a first model component; receiving a request to present a second audio track, wherein the second audio track is based on a second audio model comprising the shared model component and a second model component; rendering a sound based on the first audio track, the second audio track, the shared model component, the first model component, and the second model component; and presenting, via one or more speakers, an audio signal comprising the rendered sound.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: February 20, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Mark Brandon Hertensteiner, Samuel Charles Dicker, Blaine Ivin Wood, Michael Z. Land
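
The abstract above describes two audio tracks whose models share a common component, so the shared part need only be computed once when both tracks are rendered. The following is a minimal illustrative sketch of that idea, not the patented implementation; all class and field names are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelComponent:
    name: str
    gain: float

@dataclass
class AudioModel:
    shared: ModelComponent   # component common to multiple tracks
    unique: ModelComponent   # component specific to this track

def render(track_samples, model, shared_cache):
    # Compute (or reuse) the shared component's contribution once.
    if model.shared.name not in shared_cache:
        shared_cache[model.shared.name] = model.shared.gain
    g = shared_cache[model.shared.name] * model.unique.gain
    return [s * g for s in track_samples]

# Two tracks built on two models that reuse the same shared component.
shared = ModelComponent("room_reverb", 0.5)
model_a = AudioModel(shared, ModelComponent("voice", 2.0))
model_b = AudioModel(shared, ModelComponent("music", 4.0))

cache = {}
out_a = render([1.0, -1.0], model_a, cache)  # gain 0.5 * 2.0 = 1.0
out_b = render([1.0], model_b, cache)        # gain 0.5 * 4.0 = 2.0
```

Because both renders pass the same cache, the shared component is resolved a single time even though two distinct tracks are presented.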
  • Publication number: 20230308801
    Abstract: A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allows coupling in and natural hearing of ambient sound). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience.
    Type: Application
    Filed: June 1, 2023
    Publication date: September 28, 2023
    Inventors: Brian Lloyd SCHMIDT, David Thomas ROACH, Michael Z. LAND, Richard D. HERR
  • Patent number: 11722812
    Abstract: A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allows coupling in and natural hearing of ambient sound). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: August 8, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Brian Lloyd Schmidt, David Thomas Roach, Michael Z. Land, Richard D. Herr
  • Publication number: 20230239651
    Abstract: Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
    Type: Application
    Filed: March 8, 2023
    Publication date: July 27, 2023
    Inventors: Remi Samuel AUDFRAY, Mark Brandon HERTENSTEINER, Samuel Charles DICKER, Blaine Ivin WOOD, Michael Z. LAND, Jean-Marc JOT
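
The pipeline in the abstract above (identify an environment, retrieve its stored acoustic components, build a first model, derive a second model from it, then modify the requested signal) can be sketched as follows. This is a hypothetical toy illustration, not the patented method; the database contents, component names, and "modification" formula are all invented.

```python
# Invented in-memory store of acoustic components keyed by environment.
ACOUSTIC_DB = {
    "living_room": {"reverb_time": 0.4, "absorption": 0.3},
    "concert_hall": {"reverb_time": 2.1, "absorption": 0.1},
}

def build_first_model(components):
    # First model: assembled directly from the retrieved components.
    return dict(components)

def derive_second_model(first):
    # Second model: derived from the first, e.g. compensating the raw
    # reverb estimate for the environment's absorption.
    second = dict(first)
    second["effective_reverb"] = first["reverb_time"] * (1 - first["absorption"])
    return second

def render_signal(samples, environment):
    components = ACOUSTIC_DB[environment]          # retrieve stored components
    second = derive_second_model(build_first_model(components))
    g = 1.0 + second["effective_reverb"]           # toy signal modification
    return [s * g for s in samples]

out = render_signal([1.0], "living_room")  # gain ≈ 1.28
```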
  • Publication number: 20230217205
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Application
    Filed: March 10, 2023
    Publication date: July 6, 2023
    Inventors: Colby Nelson LEIDER, Justin Dan MATHEW, Michael Z. LAND, Blaine Ivin WOOD, Jung-Suk LEE, Anastasia Andreyevna TAJIK, Jean-Marc JOT
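
The conditional flow described in the abstract above (check whether a stored audio model corresponds to the colliding virtual object, and synthesize a signal only if one exists) can be sketched like this. It is an illustrative toy, not the patented synthesis; the store contents and the amplitude formula are invented.

```python
# Invented in-memory store of per-object audio models.
AUDIO_MODELS = {"virtual_mug": {"base_freq": 880.0}}

def synthesize_on_collision(obj_id, impact_velocity):
    model = AUDIO_MODELS.get(obj_id)   # is a model stored for this object?
    if model is None:
        return None                    # no corresponding model: synthesize nothing
    # Toy synthesis: amplitude scales with impact velocity, capped at 1.0.
    return {"freq": model["base_freq"],
            "amp": min(1.0, impact_velocity / 10.0)}

sig = synthesize_on_collision("virtual_mug", 5.0)      # model found
none_sig = synthesize_on_collision("unknown_obj", 5.0)  # no model stored
```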
  • Patent number: 11632646
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: April 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
  • Patent number: 11627430
    Abstract: Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors configured to execute a method. A method for execution by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: April 11, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Remi Samuel Audfray, Mark Brandon Hertensteiner, Samuel Charles Dicker, Blaine Ivin Wood, Michael Z. Land, Jean-Marc Jot
  • Publication number: 20220240044
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Application
    Filed: April 12, 2022
    Publication date: July 28, 2022
    Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
  • Patent number: 11337023
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: May 17, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Colby Nelson Leider, Justin Dan Mathew, Michael Z. Land, Blaine Ivin Wood, Jung-Suk Lee, Anastasia Andreyevna Tajik, Jean-Marc Jot
  • Publication number: 20220109933
    Abstract: A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allows coupling in and natural hearing of ambient sound). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience.
    Type: Application
    Filed: November 2, 2021
    Publication date: April 7, 2022
    Inventors: Brian Lloyd Schmidt, David Thomas Roach, Michael Z. Land, Richard D. Herr
  • Patent number: 11190867
    Abstract: A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allows coupling in and natural hearing of ambient sound). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: November 30, 2021
    Assignee: MAGIC LEAP, INC.
    Inventors: Brian Lloyd Schmidt, David Thomas Roach, Michael Z. Land, Richard D. Herr
  • Publication number: 20210258715
    Abstract: Disclosed herein are systems and methods for efficiently rendering audio. A method may include receiving a request to present a first audio track, wherein the first audio track is based on a first audio model comprising a shared model component and a first model component; receiving a request to present a second audio track, wherein the second audio track is based on a second audio model comprising the shared model component and a second model component; rendering a sound based on the first audio track, the second audio track, the shared model component, the first model component, and the second model component; and presenting, via one or more speakers, an audio signal comprising the rendered sound.
    Type: Application
    Filed: February 11, 2021
    Publication date: August 19, 2021
    Inventors: Remi Samuel AUDFRAY, Mark Brandon HERTENSTEINER, Samuel Charles DICKER, Blaine Ivin WOOD, Michael Z. LAND
  • Publication number: 20210195360
    Abstract: Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device.
    Type: Application
    Filed: December 18, 2020
    Publication date: June 24, 2021
    Inventors: Colby Nelson LEIDER, Justin Dan MATHEW, Michael Z. LAND, Blaine Ivin WOOD, Jung-Suk LEE, Anastasia Andreyevna TAJIK, Jean-Marc JOT
  • Publication number: 20210176588
    Abstract: Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors configured to execute a method. A method for execution by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
    Type: Application
    Filed: December 4, 2020
    Publication date: June 10, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Remi Samuel AUDFRAY, Mark Brandon HERTENSTEINER, Samuel Charles DICKER, Blaine Ivin WOOD, Michael Z. LAND, Jean-Marc JOT
  • Publication number: 20190138573
    Abstract: In one embodiment, the present disclosure is directed to a method of editing a shared presentation over a network, the method including a network data server sending a first display for displaying and editing first presentation data, and sending a second display for displaying and editing second presentation data. The network data server updates the second display by adding a media object following receipt of a first message by the data server indicating that the media object was added to the first presentation data via the first display, the media object including video data or animated graphics data. Further, the network data server updates the first display by modifying playback behavior data of the media object following receipt of a second message by the data server indicating that the playback behavior data was specified for the media object via the second display.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 9, 2019
    Inventors: Michael Z. LAND, Peter N. McCONNELL, Michael J. McMAHON
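
The message flow in the abstract above (a server mirrors a media object added via one display to the other, and propagates playback behavior specified via the second display back to the first) can be sketched as follows. This is a hypothetical illustration of the described update pattern, not the patented system; all class, method, and field names are invented.

```python
class PresentationServer:
    """Toy server holding the presentation data shown on two displays."""

    def __init__(self):
        self.displays = {"first": [], "second": []}

    def handle_add_object(self, media_object):
        # Message 1: a media object was added via one display;
        # update every display so both show it.
        for objects in self.displays.values():
            objects.append(dict(media_object))  # each display gets its own copy

    def handle_set_playback(self, obj_name, behavior):
        # Message 2: playback behavior was specified via a display;
        # update the object's behavior on every display.
        for objects in self.displays.values():
            for obj in objects:
                if obj["name"] == obj_name:
                    obj["playback"] = behavior

server = PresentationServer()
server.handle_add_object({"name": "clip1", "type": "video"})
server.handle_set_playback("clip1", {"loop": True})
```

After both messages, each display's copy of `clip1` carries the specified playback behavior, matching the two-way update the abstract describes.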
  • Patent number: 10127944
    Abstract: A multimedia authoring and playback system and method in which the playback of multimedia content is presented in one or more windows or displays called “playback displays,” and in which additional windows or displays called “control displays” are included in some embodiments to provide various management and control functions. Included are features for creating, editing and distributing multimedia content, which may be viewed by recipients who play the content (and in some cases may be allowed to modify it); also included are features for programming playback behavior of multimedia content, interconnecting multimedia content, and exploring and navigating through multimedia content.
    Type: Grant
    Filed: February 17, 2010
    Date of Patent: November 13, 2018
    Assignee: RESOURCE CONSORTIUM LIMITED
    Inventors: Michael Z. Land, Peter N. McConnell, Michael J. McMahon
  • Publication number: 20180288518
    Abstract: A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allows coupling in and natural hearing of ambient sound). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 4, 2018
    Inventors: Brian Lloyd SCHMIDT, David ROACH, Michael Z. LAND, Richard D. HERR
  • Publication number: 20100146393
    Abstract: A multimedia authoring and playback system and method in which the playback of multimedia content is presented in one or more windows or displays called “playback displays,” and in which additional windows or displays called “control displays” are included in some embodiments to provide various management and control functions. Included are features for creating, editing and distributing multimedia content, which may be viewed by recipients who play the content (and in some cases may be allowed to modify it); also included are features for programming playback behavior of multimedia content, interconnecting multimedia content, and exploring and navigating through multimedia content.
    Type: Application
    Filed: February 17, 2010
    Publication date: June 10, 2010
    Applicant: SparkPoint Software, Inc.
    Inventors: Michael Z. Land, Peter N. McConnell, Michael J. McMahon
  • Patent number: 7155676
    Abstract: A system and method for authoring and playback of multimedia content together with sharing, interconnecting and navigating the content using a computer network is disclosed. Creation, presentation and sharing of multimedia content and applications takes place within an active authoring environment, where data is always “live,” and any piece of content can be played alongside any other piece of content at any time. In this environment, there are no formal delineations between one “presentation” and another based on such things as file boundaries, other data storage constructs or the like. Instead, any piece of content can potentially be part of the “current” presentation at any time simply by being “started.” As a result, three factors become critically important: (1) the framework in which content is organized, stored and distributed; (2) the control mechanisms by which content is played and presented, and (3) as with any authoring system, the methods and means by which users create and edit content.
    Type: Grant
    Filed: December 19, 2001
    Date of Patent: December 26, 2006
    Assignee: Coolernet
    Inventors: Michael Z. Land, Peter N. McConnell, Michael J. McMahon
  • Publication number: 20040039934
    Abstract: A multimedia authoring and playback system and method in which the playback of multimedia content is presented in one or more windows or displays called “playback displays,” and in which additional windows or displays called “control displays” are included in some embodiments to provide various management and control functions. Included are features for creating, editing and distributing multimedia content, which may be viewed by recipients who play the content (and in some cases may be allowed to modify it); also included are features for programming playback behavior of multimedia content, interconnecting multimedia content, and exploring and navigating through multimedia content.
    Type: Application
    Filed: December 18, 2002
    Publication date: February 26, 2004
    Inventors: Michael Z. Land, Peter N. McConnell, Michael J. McMahon