Patents by Inventor Mark E. Drummond

Mark E. Drummond has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240094815
    Abstract: In one implementation, a method is provided for recording an XR environment. The method includes: presenting, via the display device, a graphical environment with one or more virtual agents, wherein the graphical environment corresponds to a composition of extended reality (XR) content, including the one or more virtual agents, and an image stream of a physical environment captured from a first point-of-view (POV) of the physical environment; detecting, via the one or more input devices, a user input selecting a first virtual agent from among the one or more virtual agents; and in response to detecting the user input, recording a plurality of data streams associated with the graphical environment including a first image stream of the graphical environment from the first POV and one or more data streams of the graphical environment from a current POV of the first virtual agent.
    Type: Application
    Filed: November 29, 2023
    Publication date: March 21, 2024
    Inventors: Michael J. Gutensohn, Payal Jotwani, Mark E. Drummond, Daniel L. Kovacs
  • Publication number: 20240045501
    Abstract: According to various implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a virtual agent that is associated with a first viewing frustum. The first viewing frustum includes a user avatar associated with a user, and the user avatar includes a visual representation of one or more eyes. The method includes, while displaying the virtual agent associated with the first viewing frustum, obtaining eye tracking data that is indicative of eye behavior associated with an eye of the user, updating the visual representation of one or more eyes based on the eye behavior, and directing the virtual agent to perform an action based on the updating and scene information associated with the electronic device.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Inventors: Mu Qiao, Dan Feng, Bo Morgan, Mark E. Drummond
  • Patent number: 11868526
    Abstract: In one implementation, a method is provided for recording an XR environment. The method includes: presenting, via the display device, a graphical environment with one or more virtual agents, wherein the graphical environment corresponds to a composition of extended reality (XR) content, including the one or more virtual agents, and an image stream of a physical environment captured from a first point-of-view (POV) of the physical environment; detecting, via the one or more input devices, a user input selecting a first virtual agent from among the one or more virtual agents; and in response to detecting the user input, recording a plurality of data streams associated with the graphical environment including a first image stream of the graphical environment from the first POV and one or more data streams of the graphical environment from a current POV of the first virtual agent.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: January 9, 2024
    Assignee: APPLE INC.
    Inventors: Michael J. Gutensohn, Payal Jotwani, Mark E. Drummond, Daniel L. Kovacs
  • Patent number: 11869144
    Abstract: In some implementations, a device includes one or more sensors, one or more processors and a non-transitory memory. In some implementations, a method includes determining that a first portion of a physical environment is associated with a first saliency value and a second portion of the physical environment is associated with a second saliency value that is different from the first saliency value. In some implementations, the method includes obtaining, via the one or more sensors, environmental data corresponding to the physical environment. In some implementations, the method includes generating, based on the environmental data, a model of the physical environment by modeling the first portion with a first set of modeling features that is a function of the first saliency value and modeling the second portion with a second set of modeling features that is a function of the second saliency value.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: January 9, 2024
    Assignee: APPLE INC.
    Inventors: Payal Jotwani, Bo Morgan, Behrooz Mahasseni, Bradley W. Peebler, Dan Feng, Mark E. Drummond, Siva Chandra Mouli Sivapurapu
  • Patent number: 11822716
    Abstract: According to various implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a virtual agent that is associated with a first viewing frustum. The first viewing frustum includes a user avatar associated with a user, and the user avatar includes a visual representation of one or more eyes. The method includes, while displaying the virtual agent associated with the first viewing frustum, obtaining eye tracking data that is indicative of eye behavior associated with an eye of the user, updating the visual representation of one or more eyes based on the eye behavior, and directing the virtual agent to perform an action based on the updating and scene information associated with the electronic device.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: November 21, 2023
    Assignee: APPLE INC.
    Inventors: Mu Qiao, Dan Feng, Bo Morgan, Mark E. Drummond
  • Publication number: 20230350536
    Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting a point-of-view (POV) for displaying an environment. In some implementations, a device includes a display, one or more processors, and a non-transitory memory. In some implementations, a method includes obtaining a request to display a graphical environment. The graphical environment is associated with a set of saliency values corresponding to respective portions of the graphical environment. A POV for displaying the graphical environment is selected based on the set of saliency values. The graphical environment is displayed from the selected POV on the display.
    Type: Application
    Filed: February 22, 2023
    Publication date: November 2, 2023
    Inventors: Dan Feng, Aashi Manglik, Adam M. O'Hern, Bo Morgan, Bradley W. Peebler, Daniel L. Kovacs, Edward Ahn, James Moll, Mark E. Drummond, Michelle Chua, Mu Qiao, Noah Gamboa, Payal Jotwani, Siva Chandra Mouli Sivapurapu
  • Patent number: 11699270
    Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: July 11, 2023
    Assignee: APPLE INC.
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
  • Publication number: 20230089049
    Abstract: In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes scanning a first physical environment to detect a first physical object in the first physical environment and a second physical object in the first physical environment, wherein the first physical object meets at least one first object criterion and the second physical object meets at least one second object criterion. The method includes displaying, in association with the first physical environment, a virtual object moving along a first path from the first physical object to the second physical object.
    Type: Application
    Filed: June 29, 2022
    Publication date: March 23, 2023
    Inventors: Mark E. Drummond, Daniel L. Kovacs, Shaun D. Budhram, Edward Ahn, Behrooz Mahasseni, Aashi Manglik, Payal Jotwani, Mu Qiao, Bo Morgan, Noah Gamboa, Michael J. Gutensohn, Dan Feng, Siva Chandra Mouli Sivapurapu
  • Publication number: 20230026511
    Abstract: According to various implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a virtual agent that is associated with a first viewing frustum. The first viewing frustum includes a user avatar associated with a user, and the user avatar includes a visual representation of one or more eyes. The method includes, while displaying the virtual agent associated with the first viewing frustum, obtaining eye tracking data that is indicative of eye behavior associated with an eye of the user, updating the visual representation of one or more eyes based on the eye behavior, and directing the virtual agent to perform an action based on the updating and scene information associated with the electronic device.
    Type: Application
    Filed: June 24, 2022
    Publication date: January 26, 2023
    Inventors: Mu Qiao, Dan Feng, Bo Morgan, Mark E. Drummond
  • Publication number: 20220350401
    Abstract: In one implementation, a method is provided for recording an XR environment. The method includes: presenting, via the display device, a graphical environment with one or more virtual agents, wherein the graphical environment corresponds to a composition of extended reality (XR) content, including the one or more virtual agents, and an image stream of a physical environment captured from a first point-of-view (POV) of the physical environment; detecting, via the one or more input devices, a user input selecting a first virtual agent from among the one or more virtual agents; and in response to detecting the user input, recording a plurality of data streams associated with the graphical environment including a first image stream of the graphical environment from the first POV and one or more data streams of the graphical environment from a current POV of the first virtual agent.
    Type: Application
    Filed: March 24, 2022
    Publication date: November 3, 2022
    Inventors: Michael J. Gutensohn, Payal Jotwani, Mark E. Drummond, Daniel L. Kovacs
  • Publication number: 20220270335
    Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
    Type: Application
    Filed: May 9, 2022
    Publication date: August 25, 2022
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
  • Patent number: 11373377
    Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: June 28, 2022
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
  • Publication number: 20210201108
    Abstract: In one implementation, a method of generating an environment state is performed by a device including one or more processors and non-transitory memory. The method includes obtaining a first environment state of an environment, wherein the first environment state indicates the inclusion in the environment of a first asset associated with a first timescale value and a second asset associated with a second timescale value, wherein the first environment state further indicates that the first asset has a first state of the first asset and the second asset has a first state of the second asset. The method includes determining a second state of the first asset and the second asset based on the first and second timescale values. The method includes determining a second environment state that indicates that the first asset has its second state and the second asset has its second state.
    Type: Application
    Filed: March 16, 2021
    Publication date: July 1, 2021
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu
  • Publication number: 20210201594
    Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
    Type: Application
    Filed: March 16, 2021
    Publication date: July 1, 2021
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
  • Publication number: 20110270678
    Abstract: As the result of a keyword search, real-time and social news stream Web search results are retrieved and analyzed to build a topic model of n-grams. The n-grams of the topic model are treated as ad-based keywords to determine advertisements to be displayed in conjunction with the real-time Web search results. The real-time Web search results and the advertisements are then presented or displayed for user consumption or review.
    Type: Application
    Filed: May 2, 2011
    Publication date: November 3, 2011
    Inventors: Mark E. Drummond, David B. Hills, Susan M. Doherty, William York, Boris Agapiev, Nikola Todorovic, Aleksandar Ilic, Jonathan Ewert, Stephanie Fulqui, Steven T. Jurvetson, Stephanie A. Sarka
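Several of the abstracts above (e.g., patent 11699270 and publication 20210201108) describe advancing an environment state by updating each asset according to its own timescale value. A minimal sketch of that idea follows; the class, the field names, and the simple age-accumulation update rule are illustrative assumptions, not anything specified in the patents themselves:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A hypothetical XR asset that evolves at its own rate."""
    name: str
    timescale: float  # simulated seconds advanced per tick (assumed unit)
    age: float = 0.0  # current state, modeled here simply as an age

def advance_environment(assets, ticks):
    """Produce the next environment state: each asset evolves at its own timescale."""
    for asset in assets:
        asset.age += asset.timescale * ticks
    return assets

env = [Asset("tree", timescale=60.0), Asset("candle", timescale=1.0)]
advance_environment(env, ticks=10)
# after 10 ticks the tree has aged 600 simulated seconds, the candle only 10
```

In a fuller system, the per-asset update would dispatch to different models (cheap coarse models for large timescales, detailed ones for small timescales) rather than a single arithmetic rule.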
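Publication 20230350536 describes selecting a point-of-view (POV) for displaying a graphical environment based on saliency values attached to portions of that environment. One plausible reading is to score each candidate POV by the total saliency of the portions it can see; the function, the dict-based scene representation, and the example values below are all assumptions made for illustration:

```python
def select_pov(candidate_povs, saliency):
    """Pick the POV whose visible portions carry the highest total saliency.

    candidate_povs: dict mapping POV name -> set of portion ids visible from it (assumed)
    saliency: dict mapping portion id -> saliency value (assumed)
    """
    def score(pov):
        return sum(saliency.get(portion, 0.0) for portion in candidate_povs[pov])
    return max(candidate_povs, key=score)

saliency = {"face": 0.9, "hands": 0.6, "floor": 0.1}
povs = {"front": {"face", "hands"}, "overhead": {"floor", "hands"}}
select_pov(povs, saliency)  # "front" (0.9 + 0.6 beats 0.6 + 0.1)
```

The same saliency weighting could drive the modeling-detail choice in patent 11869144, e.g. allocating richer feature sets to higher-saliency portions.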