Patents by Inventor Bo Morgan
Bo Morgan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961191
Abstract: In some implementations, a method includes obtaining a semantic construction of a physical environment. In some implementations, the semantic construction of the physical environment includes a representation of a physical element and a semantic label for the physical element. In some implementations, the method includes obtaining a graphical representation of the physical element. In some implementations, the method includes synthesizing a perceptual property vector (PPV) for the graphical representation of the physical element based on the semantic label for the physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes compositing an affordance in association with the graphical representation of the physical element.
Type: Grant
Filed: September 2, 2021
Date of Patent: April 16, 2024
Assignee: Apple Inc.
Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
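The flow described in this abstract can be illustrated with a minimal sketch: a semantic label selects perceptual characteristic values, and an affordance is composited based on them. All names, labels, and values below are hypothetical illustrations, not the patented implementation.

```python
# Hypothetical table mapping semantic labels to perceptual
# characteristic values (the PPV); the labels and values are invented.
PPV_TABLE = {
    "table": {"hardness": 0.9, "roughness": 0.3, "reflectivity": 0.4},
    "pillow": {"hardness": 0.1, "roughness": 0.6, "reflectivity": 0.1},
}

DEFAULT_PPV = {"hardness": 0.5, "roughness": 0.5, "reflectivity": 0.5}

def synthesize_ppv(semantic_label):
    """Synthesize a PPV for an element from its semantic label."""
    return PPV_TABLE.get(semantic_label, DEFAULT_PPV)

def composite_affordance(element, ppv):
    """Associate an affordance with the element's graphical representation,
    chosen here from one perceptual characteristic value."""
    kind = "touchable" if ppv["hardness"] > 0.5 else "squeezable"
    return {"element": element, "ppv": ppv, "affordance": kind}

table_entry = composite_affordance("table", synthesize_ppv("table"))
```

The dictionary lookup stands in for whatever learned or rule-based synthesis the claims cover; the point is only the data flow from label to PPV to affordance.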
-
Publication number: 20240045501
Abstract: According to various implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a virtual agent that is associated with a first viewing frustum. The first viewing frustum includes a user avatar associated with a user, and the user avatar includes a visual representation of one or more eyes. The method includes, while displaying the virtual agent associated with the first viewing frustum, obtaining eye tracking data that is indicative of eye behavior associated with an eye of the user, updating the visual representation of one or more eyes based on the eye behavior, and directing the virtual agent to perform an action based on the updating and scene information associated with the electronic device.
Type: Application
Filed: October 18, 2023
Publication date: February 8, 2024
Inventors: Mu Qiao, Dan Feng, Bo Morgan, Mark E. Drummond
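A minimal sketch of the described pipeline: mirror the user's eye behavior onto the avatar, then map that behavior plus scene information to an agent action. The behavior names and the action mapping are hypothetical.

```python
def update_avatar_eyes(avatar, eye_behavior):
    """Update the avatar's visual eye representation from tracked behavior."""
    avatar["eyes"] = eye_behavior
    return avatar

def direct_agent(eye_behavior, scene_info):
    """Choose an agent action from eye behavior plus scene information.
    The rule here is an invented example of such a mapping."""
    if eye_behavior == "gaze_at_agent" and scene_info.get("agent_visible"):
        return "make_eye_contact"
    return "continue_idle"

avatar = update_avatar_eyes({"eyes": None}, "gaze_at_agent")
action = direct_agent("gaze_at_agent", {"agent_visible": True})
```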
-
Patent number: 11893207
Abstract: In some implementations, a method includes obtaining environmental data corresponding to a physical environment. In some implementations, the method includes determining, based on the environmental data, a bounding surface of the physical environment. In some implementations, the method includes detecting a physical element located within the physical environment based on the environmental data. In some implementations, the method includes determining a semantic label for the physical element based on at least a portion of the environmental data corresponding to the physical element. In some implementations, the method includes generating a semantic construction of the physical environment based on the environmental data. In some implementations, the semantic construction of the physical environment includes a representation of the bounding surface, a representation of the physical element and the semantic label for the physical element.
Type: Grant
Filed: September 14, 2021
Date of Patent: February 6, 2024
Assignee: Apple Inc.
Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
-
Publication number: 20240038228
Abstract: In some implementations, a method includes displaying, on a display, an environment that includes a representation of a virtual agent that is associated with a sensory characteristic. In some implementations, the method includes selecting, based on the sensory characteristic associated with the virtual agent, a subset of a plurality of sensors to provide sensor data for the virtual agent. In some implementations, the method includes providing the sensor data captured by the subset of the plurality of sensors to the virtual agent in order to reduce power consumption of the device. In some implementations, the method includes displaying a manipulation of the representation of the virtual agent based on an interpretation of the sensor data by the virtual agent.
Type: Application
Filed: July 26, 2023
Publication date: February 1, 2024
Inventors: Dan Feng, Behrooz Mahasseni, Bo Morgan, Daniel L. Kovacs, Mu Qiao
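The power-saving idea can be sketched as filtering the device's sensors down to those matching the agent's sensory characteristic. The sensor names, modalities, and power figures are hypothetical.

```python
# Hypothetical sensor inventory for the device.
ALL_SENSORS = {
    "camera":     {"modality": "vision", "power_mw": 300},
    "depth":      {"modality": "vision", "power_mw": 200},
    "microphone": {"modality": "audio",  "power_mw": 50},
}

def select_sensors(sensory_characteristic):
    """Select the subset of sensors matching the agent's sensory
    characteristic; unselected sensors need not provide data."""
    return {name: s for name, s in ALL_SENSORS.items()
            if s["modality"] == sensory_characteristic}

def power_draw(sensors):
    """Total power consumed by a set of active sensors."""
    return sum(s["power_mw"] for s in sensors.values())

vision_only = select_sensors("vision")
savings = power_draw(ALL_SENSORS) - power_draw(vision_only)
```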
-
Patent number: 11869144
Abstract: In some implementations, a device includes one or more sensors, one or more processors and a non-transitory memory. In some implementations, a method includes determining that a first portion of a physical environment is associated with a first saliency value and a second portion of the physical environment is associated with a second saliency value that is different from the first saliency value. In some implementations, the method includes obtaining, via the one or more sensors, environmental data corresponding to the physical environment. In some implementations, the method includes generating, based on the environmental data, a model of the physical environment by modeling the first portion with a first set of modeling features that is a function of the first saliency value and modeling the second portion with a second set of modeling features that is a function of the second saliency value.
Type: Grant
Filed: February 23, 2022
Date of Patent: January 9, 2024
Assignee: Apple Inc.
Inventors: Payal Jotwani, Bo Morgan, Behrooz Mahasseni, Bradley W. Peebler, Dan Feng, Mark E. Drummond, Siva Chandra Mouli Sivapurapu
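A minimal sketch of modeling features as a function of saliency: more salient portions get a larger feature budget (for instance, more mesh detail). The linear mapping and the portion names are illustrative assumptions.

```python
def modeling_features(saliency, max_features=1000):
    """Return a feature budget as a function of a portion's saliency value;
    the linear scaling is an invented example of such a function."""
    return max(1, int(saliency * max_features))

def build_model(portions):
    """portions: list of (name, saliency) pairs -> per-portion budgets."""
    return {name: modeling_features(s) for name, s in portions}

# Hypothetical environment portions with differing saliency values.
model = build_model([("doorway", 0.9), ("blank_wall", 0.1)])
```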
-
Patent number: 11822716
Abstract: According to various implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a virtual agent that is associated with a first viewing frustum. The first viewing frustum includes a user avatar associated with a user, and the user avatar includes a visual representation of one or more eyes. The method includes, while displaying the virtual agent associated with the first viewing frustum, obtaining eye tracking data that is indicative of eye behavior associated with an eye of the user, updating the visual representation of one or more eyes based on the eye behavior, and directing the virtual agent to perform an action based on the updating and scene information associated with the electronic device.
Type: Grant
Filed: June 24, 2022
Date of Patent: November 21, 2023
Assignee: Apple Inc.
Inventors: Mu Qiao, Dan Feng, Bo Morgan, Mark E. Drummond
-
Publication number: 20230350536
Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting a point-of-view (POV) for displaying an environment. In some implementations, a device includes a display, one or more processors, and a non-transitory memory. In some implementations, a method includes obtaining a request to display a graphical environment. The graphical environment is associated with a set of saliency values corresponding to respective portions of the graphical environment. A POV for displaying the graphical environment is selected based on the set of saliency values. The graphical environment is displayed from the selected POV on the display.
Type: Application
Filed: February 22, 2023
Publication date: November 2, 2023
Inventors: Dan Feng, Aashi Manglik, Adam M. O'Hern, Bo Morgan, Bradley W. Peebler, Daniel L. Kovacs, Edward Ahn, James Moll, Mark E. Drummond, Michelle Chua, Mu Qiao, Noah Gamboa, Payal Jotwani, Siva Chandra Mouli Sivapurapu
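Saliency-based POV selection can be sketched as scoring candidate POVs by the total saliency of the portions each one can see. The candidate POVs, visibility sets, and saliency values are hypothetical.

```python
# Hypothetical per-portion saliency values for a graphical environment.
SALIENCY = {"statue": 0.8, "bench": 0.2, "fountain": 0.6}

# Hypothetical candidate POVs and the portions visible from each.
CANDIDATE_POVS = {
    "front":  ["statue", "bench"],
    "side":   ["bench", "fountain"],
    "aerial": ["statue", "fountain"],
}

def select_pov(candidates, saliency):
    """Select the POV whose visible portions have the highest
    total saliency; summing is one possible selection rule."""
    return max(candidates,
               key=lambda pov: sum(saliency[p] for p in candidates[pov]))

best = select_pov(CANDIDATE_POVS, SALIENCY)
```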
-
Patent number: 11699270
Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
Type: Grant
Filed: May 9, 2022
Date of Patent: July 11, 2023
Assignee: Apple Inc.
Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
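The per-timescale dispatch can be sketched as choosing between a detailed model and a cheaper aggregate model depending on how far ahead the environment state must be advanced. The two models and the threshold are illustrative stand-ins.

```python
def fine_step(state, seconds):
    """Detailed simulation suitable for small timescales (expensive)."""
    return {"age": state["age"] + seconds, "detail": "high"}

def coarse_step(state, seconds):
    """Aggregate model that jumps ahead cheaply over large timescales."""
    return {"age": state["age"] + seconds, "detail": "low"}

def advance(state, seconds, coarse_threshold=3600):
    """Select a model based on the requested timescale, then apply it."""
    step = coarse_step if seconds >= coarse_threshold else fine_step
    return step(state, seconds)

short = advance({"age": 0}, 10)       # small timescale -> detailed model
long = advance({"age": 0}, 86400)     # one day -> aggregate model
```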
-
Publication number: 20230089049
Abstract: In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes scanning a first physical environment to detect a first physical object in the first physical environment and a second physical object in the first physical environment, wherein the first physical object meets at least one first object criterion and the second physical object meets at least one second object criterion. The method includes displaying, in association with the first physical environment, a virtual object moving along a first path from the first physical object to the second physical object.
Type: Application
Filed: June 29, 2022
Publication date: March 23, 2023
Inventors: Mark E. Drummond, Daniel L. Kovacs, Shaun D. Budhram, Edward Ahn, Behrooz Mahasseni, Aashi Manglik, Payal Jotwani, Mu Qiao, Bo Morgan, Noah Gamboa, Michael J. Gutensohn, Dan Feng, Siva Chandra Mouli Sivapurapu
-
Publication number: 20230026511
Abstract: According to various implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a virtual agent that is associated with a first viewing frustum. The first viewing frustum includes a user avatar associated with a user, and the user avatar includes a visual representation of one or more eyes. The method includes, while displaying the virtual agent associated with the first viewing frustum, obtaining eye tracking data that is indicative of eye behavior associated with an eye of the user, updating the visual representation of one or more eyes based on the eye behavior, and directing the virtual agent to perform an action based on the updating and scene information associated with the electronic device.
Type: Application
Filed: June 24, 2022
Publication date: January 26, 2023
Inventors: Mu Qiao, Dan Feng, Bo Morgan, Mark E. Drummond
-
Patent number: 11436813
Abstract: A method includes generating, in coordination with an emergent content engine, a first objective for a first objective-effectuator and a second objective for a second objective-effectuator instantiated in a computer-generated reality (CGR) environment. The first and second objectives are associated with a mutual plan. The method includes generating, based on characteristic values associated with the first and second objective-effectuators, a first directive for the first objective-effectuator and a second directive for the second objective-effectuator. The first directive limits actions generated by the first objective-effectuator over a first set of time frames associated with the first objective and the second directive limits actions generated by the second objective-effectuator over a second set of time frames associated with the second objective.
Type: Grant
Filed: May 20, 2021
Date of Patent: September 6, 2022
Assignee: Apple Inc.
Inventors: Mark Drummond, Siva Chandra Mouli Sivapurapu, Bo Morgan
-
Publication number: 20220270335
Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
Type: Application
Filed: May 9, 2022
Publication date: August 25, 2022
Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
-
Publication number: 20220262081
Abstract: In some implementations, a method includes obtaining an objective for a computer-generated reality (CGR) representation of an objective-effectuator. In some implementations, the objective is associated with a plurality of time frames. In some implementations, the method includes determining a plurality of candidate plans that satisfy the objective. In some implementations, the method includes selecting a first candidate plan of the plurality of candidate plans based on a selection criterion. In some implementations, the method includes effectuating the first candidate plan in order to satisfy the objective. In some implementations, the first candidate plan triggers the CGR representation of the objective-effectuator to perform a series of actions over the plurality of time frames associated with the objective.
Type: Application
Filed: March 3, 2022
Publication date: August 18, 2022
Inventors: Mark Drummond, Siva Chandra Mouli Sivapurapu, Bo Morgan
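The plan-selection flow can be sketched as: enumerate candidate plans that satisfy the objective, pick one with a selection criterion, then spread its actions across the objective's time frames. The plans and the fewest-actions criterion are hypothetical examples.

```python
# Hypothetical candidate plans that each satisfy some objective.
CANDIDATE_PLANS = [
    {"name": "walk_over", "actions": ["stand", "walk", "sit"]},
    {"name": "teleport",  "actions": ["blink"]},
]

def select_plan(plans, criterion):
    """Select a candidate plan using a selection criterion."""
    return min(plans, key=criterion)

def effectuate(plan, time_frames):
    """Trigger the plan's actions over the objective's time frames."""
    return list(zip(time_frames, plan["actions"]))

# Example criterion: prefer the plan with the fewest actions.
plan = select_plan(CANDIDATE_PLANS, criterion=lambda p: len(p["actions"]))
schedule = effectuate(plan, time_frames=[0])
```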
-
Patent number: 11373377
Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales.
Type: Grant
Filed: March 16, 2021
Date of Patent: June 28, 2022
Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
-
Patent number: 11302080
Abstract: In some implementations, a method includes obtaining an objective for a computer-generated reality (CGR) representation of an objective-effectuator. In some implementations, the objective is associated with a plurality of time frames. In some implementations, the method includes determining a plurality of candidate plans that satisfy the objective. In some implementations, the method includes selecting a first candidate plan of the plurality of candidate plans based on a selection criterion. In some implementations, the method includes effectuating the first candidate plan in order to satisfy the objective. In some implementations, the first candidate plan triggers the CGR representation of the objective-effectuator to perform a series of actions over the plurality of time frames associated with the objective.
Type: Grant
Filed: April 30, 2020
Date of Patent: April 12, 2022
Assignee: Apple Inc.
Inventors: Mark Drummond, Siva Chandra Mouli Sivapurapu, Bo Morgan
-
Publication number: 20210407185
Abstract: In some implementations, a method includes obtaining environmental data corresponding to a physical environment. In some implementations, the method includes determining, based on the environmental data, a bounding surface of the physical environment. In some implementations, the method includes detecting a physical element located within the physical environment based on the environmental data. In some implementations, the method includes determining a semantic label for the physical element based on at least a portion of the environmental data corresponding to the physical element. In some implementations, the method includes generating a semantic construction of the physical environment based on the environmental data. In some implementations, the semantic construction of the physical environment includes a representation of the bounding surface, a representation of the physical element and the semantic label for the physical element.
Type: Application
Filed: September 14, 2021
Publication date: December 30, 2021
Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
-
Publication number: 20210398359
Abstract: In some implementations, a method includes obtaining a semantic construction of a physical environment. In some implementations, the semantic construction of the physical environment includes a representation of a physical element and a semantic label for the physical element. In some implementations, the method includes obtaining a graphical representation of the physical element. In some implementations, the method includes synthesizing a perceptual property vector (PPV) for the graphical representation of the physical element based on the semantic label for the physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes compositing an affordance in association with the graphical representation of the physical element.
Type: Application
Filed: September 2, 2021
Publication date: December 23, 2021
Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
-
Publication number: 20210398360
Abstract: A method includes determining a first portion of state information that is accessible to a first agent instantiated in an environment. The method includes determining a second portion of the state information that is accessible to a second agent instantiated in the environment. The method includes generating a first set of actions for a representation of the first agent based on the first portion of the state information to satisfy a first objective of the first agent. The method includes generating a second set of actions for a representation of the second agent based on the second portion of the state information to satisfy a second objective of the second agent. The method includes modifying the representations of the first and second agents based on the first and second set of actions.
Type: Application
Filed: September 2, 2021
Publication date: December 23, 2021
Inventors: Mark Drummond, Bo Morgan
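The per-agent state partitioning can be sketched as filtering a shared state down to each agent's accessible portion before generating that agent's actions. The state fields, access sets, and the toy objective are hypothetical.

```python
# Hypothetical shared environment state.
WORLD_STATE = {"door_open": True, "music_playing": False, "lights_on": True}

# Hypothetical per-agent access: which state keys each agent can perceive.
ACCESS = {
    "agent_a": {"door_open", "lights_on"},
    "agent_b": {"music_playing"},
}

def accessible_portion(agent, state):
    """Return the portion of the state information accessible to the agent."""
    return {k: v for k, v in state.items() if k in ACCESS[agent]}

def generate_actions(agent, state):
    """Toy objective: close the door if this agent can see it is open."""
    portion = accessible_portion(agent, state)
    return ["close_door"] if portion.get("door_open") else ["wait"]

a_actions = generate_actions("agent_a", WORLD_STATE)
b_actions = generate_actions("agent_b", WORLD_STATE)
```

Agent B waits even though the door is open, because the door's state is outside its accessible portion; that asymmetry is the point of the partitioning.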
-
Publication number: 20210398327
Abstract: In some implementations, a method includes obtaining, by a virtual intelligent agent (VIA), a perceptual property vector (PPV) for a graphical representation of a physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes instantiating a graphical representation of the VIA in a graphical environment that includes the graphical representation of the physical element and an affordance that is associated with the graphical representation of the physical element. In some implementations, the method includes generating, by the VIA, an action for the graphical representation of the VIA based on the PPV. In some implementations, the method includes displaying a manipulation of the affordance by the graphical representation of the VIA in order to effectuate the action generated by the VIA.
Type: Application
Filed: September 2, 2021
Publication date: December 23, 2021
Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
-
Publication number: 20210374615
Abstract: In one implementation, a method of generating environment states is performed by a device including one or more processors and non-transitory memory. The method includes displaying an environment including an asset associated with a neural network model and having a plurality of asset states. The method includes receiving a user input indicative of a training request. The method includes selecting, based on the user input, a training focus indicating one or more of the plurality of asset states. The method includes generating a set of training data including a plurality of training instances weighted according to the training focus. The method includes training the neural network model on the set of training data.
Type: Application
Filed: August 9, 2021
Publication date: December 2, 2021
Inventors: Mark Drummond, Peter Meier, Bo Morgan, Cameron J. Dunn, Siva Chandra Mouli Sivapurapu
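The weighting step can be sketched as replicating training instances whose asset state falls under the selected training focus. Replication is one simple way to realize per-instance weights; the asset states and weight values are hypothetical.

```python
def weight_instances(instances, focus_states, focus_weight=5, base_weight=1):
    """Build a training set in which instances matching the training focus
    are weighted (here, replicated) more heavily than the rest."""
    weighted = []
    for inst in instances:
        w = focus_weight if inst["state"] in focus_states else base_weight
        weighted.extend([inst] * w)
    return weighted

# Hypothetical training instances, one per observed asset state.
data = [{"state": "flying"}, {"state": "resting"}]

# User-selected training focus: emphasize the "flying" asset state.
train_set = weight_instances(data, focus_states={"flying"})
```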