Patents by Inventor Bo MORGAN

Bo MORGAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210272381
    Abstract: A method includes generating, in coordination with an emergent content engine, a first objective for a first objective-effectuator and a second objective for a second objective-effectuator instantiated in a computer-generated reality (CGR) environment. The first and second objectives are associated with a mutual plan. The method includes generating, based on characteristic values associated with the first and second objective-effectuators, a first directive for the first objective-effectuator and a second directive for the second objective-effectuator. The first directive limits actions generated by the first objective-effectuator over a first set of time frames associated with the first objective, and the second directive limits actions generated by the second objective-effectuator over a second set of time frames associated with the second objective. (An illustrative sketch, not part of the filing, follows at the end of this entry.)
    Type: Application
    Filed: May 20, 2021
    Publication date: September 2, 2021
    Inventors: Mark Drummond, Siva Chandra Mouli Sivapurapu, Bo Morgan
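    Illustrative sketch: A minimal Python sketch of one reading of this abstract; it is not Apple's implementation. The emergent content engine is stood in for by a plain function, and the class names, the "aggressiveness" characteristic, and the action sets are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Objective:
            goal: str
            time_frames: range              # the set of time frames this objective spans

        @dataclass
        class Directive:
            allowed_actions: set            # limits the actions the effectuator may generate
            time_frames: range

        @dataclass
        class ObjectiveEffectuator:
            name: str
            characteristics: dict           # characteristic values, e.g. {"aggressiveness": 0.2}

        def generate_objectives(mutual_plan):
            """Stand-in for the emergent content engine: both objectives serve one mutual plan."""
            return (Objective(f"lead: {mutual_plan}", range(0, 10)),
                    Objective(f"support: {mutual_plan}", range(5, 15)))

        def generate_directive(effectuator, objective):
            """Derive a directive from characteristic values; it bounds the actions the
            effectuator may generate over the time frames of its objective."""
            bold = effectuator.characteristics.get("aggressiveness", 0.0) > 0.5
            allowed = {"advance", "engage"} if bold else {"observe", "follow"}
            return Directive(allowed_actions=allowed, time_frames=objective.time_frames)

        hero = ObjectiveEffectuator("hero", {"aggressiveness": 0.9})
        sidekick = ObjectiveEffectuator("sidekick", {"aggressiveness": 0.1})
        first_objective, second_objective = generate_objectives("cross the bridge together")
        first_directive = generate_directive(hero, first_objective)
        second_directive = generate_directive(sidekick, second_objective)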
  • Patent number: 11055930
    Abstract: A method includes generating, in coordination with an emergent content engine, a first objective for a first objective-effectuator and a second objective for a second objective-effectuator instantiated in a computer-generated reality (CGR) environment. The first and second objectives are associated with a mutual plan. The method includes generating, based on characteristic values associated with the first and second objective-effectuators, a first directive for the first objective-effectuator and a second directive for the second objective-effectuator. The first directive limits actions generated by the first objective-effectuator over a first set of time frames associated with the first objective, and the second directive limits actions generated by the second objective-effectuator over a second set of time frames associated with the second objective.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: July 6, 2021
    Assignee: APPLE INC.
    Inventors: Mark Drummond, Siva Chandra Mouli Sivapurapu, Bo Morgan
  • Publication number: 20210201594
    Abstract: In various implementations, a device surveys a scene and presents, within the scene, an extended reality (XR) environment including one or more assets that evolve over time (e.g., change location or age). Modeling such an XR environment at various timescales can be computationally intensive, particularly when modeling the XR environment over larger timescales. Accordingly, in various implementations, different models are used to determine the environment state of the XR environment when presenting the XR environment at different timescales. (An illustrative sketch, not part of the filing, follows at the end of this entry.)
    Type: Application
    Filed: March 16, 2021
    Publication date: July 1, 2021
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu, Ian M. Richter
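    Illustrative sketch: A minimal Python sketch, assuming the core idea is that the requested presentation timescale selects which model advances the environment state. The model names and the one-hour threshold are invented for illustration.

        def fine_grained_model(state, dt):
            # detailed per-step update; affordable only over short timescales
            return {**state, "age": state["age"] + dt}

        def coarse_model(state, dt):
            # cheap closed-form update used when simulating long spans of time
            return {**state, "age": state["age"] + dt, "location": "approximate"}

        def advance_environment(state, dt_seconds):
            """Select, by timescale, the model that determines the next environment state."""
            model = fine_grained_model if dt_seconds < 3600 else coarse_model
            return model(state, dt_seconds)

        state = {"age": 0.0, "location": "origin"}
        state = advance_environment(state, 1.0)        # short timescale: fine-grained model
        state = advance_environment(state, 86400.0)    # one simulated day: coarse model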
  • Publication number: 20210201108
    Abstract: In one implementation, a method of generating an environment state is performed by a device including one or more processors and non-transitory memory. The method includes obtaining a first environment state of an environment, wherein the first environment state indicates the inclusion in the environment of a first asset associated with a first timescale value and a second asset associated with a second timescale value, wherein the first environment state further indicates that the first asset has a first state of the first asset and the second asset has a first state of the second asset. The method includes determining a second state of the first asset and a second state of the second asset based on the first and second timescale values. The method includes determining a second environment state that indicates that the first asset has its second state and the second asset has its second state. (An illustrative sketch, not part of the filing, follows at the end of this entry.)
    Type: Application
    Filed: March 16, 2021
    Publication date: July 1, 2021
    Inventors: Bo Morgan, Mark E. Drummond, Peter Meier, Cameron J. Dunn, John Christopher Russell, Siva Chandra Mouli Sivapurapu
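    Illustrative sketch: A minimal Python sketch of per-asset timescale values, assuming each asset's second state is advanced in proportion to its own timescale. The field names and numbers are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Asset:
            name: str
            timescale: float    # how fast this asset evolves relative to elapsed time
            state: float        # e.g. growth, wear, or age

        def next_environment_state(assets, elapsed):
            """Determine each asset's second state from its first state and its timescale value."""
            return [Asset(a.name, a.timescale, a.state + elapsed * a.timescale) for a in assets]

        environment = [Asset("tree", timescale=0.001, state=0.0),     # evolves slowly
                       Asset("campfire", timescale=1.0, state=0.0)]   # evolves quickly
        environment = next_environment_state(environment, elapsed=60.0)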
  • Patent number: 10818063
    Abstract: Systems and methods for automatically animating a character based on an existing corpus of animation are described. The character may be from a previously produced feature animated film, and the data used for training may be the data used to animate the character in the film. A low-dimensional embedding for subsets of the existing animation corresponding to different semantic labels may be learned by mapping high-dimensional rig control parameters to a latent space. A particle model may be used to move within the latent space, thereby generating novel animations corresponding to the space's semantic label, such as a pose. Bridges may link a first pose of a first model within the latent space to a similar second pose of a second model of the space. Animations corresponding to transitions between semantic labels may be generated by creating animation paths that traverse a bridge from one model into another. (An illustrative sketch, not part of the filing, follows at the end of this entry.)
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: October 27, 2020
    Assignee: DreamWorks Animation L.L.C.
    Inventors: Stephen Bailey, Martin Watt, Bo Morgan, James O'Brien
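    Illustrative sketch: A minimal Python sketch of the latent-space idea shared by this patent family: embed high-dimensional rig control parameters in a low-dimensional space, move through that space with a simple particle model, and decode back to rig parameters. PCA, the toy data, and the damping constants are stand-ins rather than the patented training procedure, and bridge construction between models is not shown.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        rig_curves = rng.normal(size=(500, 120))          # 500 frames x 120 rig control parameters (toy data)

        embedding = PCA(n_components=3).fit(rig_curves)   # low-dimensional embedding of the corpus
        z = embedding.transform(rig_curves[:1])[0]        # start the particle at an existing pose
        velocity = np.zeros(3)

        novel_frames = []
        for _ in range(240):                              # particle model: a damped random walk in latent space
            velocity = 0.9 * velocity + 0.1 * rng.normal(size=3)
            z = z + velocity
            novel_frames.append(embedding.inverse_transform(z.reshape(1, -1))[0])

    Per the abstract, separate models would correspond to different semantic labels, and bridges would connect similar poses across models so that transition animations can traverse them.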
  • Publication number: 20190156548
    Abstract: Systems and methods for automatically animating a character based on an existing corpus of animation are described. The character may be from a previously produced feature animated film, and the data used for training may be the data used to animate the character in the film. A low-dimensional embedding for subsets of the existing animation corresponding to different semantic labels may be learned by mapping high-dimensional rig control parameters to a latent space. A particle model may be used to move within the latent space, thereby generating novel animations corresponding to the space's semantic label, such as a pose. Bridges may link a first pose of a first model within the latent space to a similar second pose of a second model of the space. Animations corresponding to transitions between semantic labels may be generated by creating animation paths that traverse a bridge from one model into another.
    Type: Application
    Filed: January 18, 2019
    Publication date: May 23, 2019
    Applicant: DreamWorks Animation L.L.C.
    Inventors: Stephen BAILEY, Martin WATT, Bo MORGAN, James O'BRIEN
  • Patent number: 10262448
    Abstract: Systems and methods for automatically animating a character based on an existing corpus of animation are described. The character may be from a previously produced feature animated film, and the data used for training may be the data used to animate the character in the film. A low-dimensional embedding for subsets of the existing animation corresponding to different semantic labels may be learned by mapping high-dimensional rig control parameters to a latent space. A particle model may be used to move within the latent space, thereby generating novel animations corresponding to the space's semantic label, such as a pose. Bridges may link a first pose of a first model within the latent space to a similar second pose of a second model of the space. Animations corresponding to transitions between semantic labels may be generated by creating animation paths that traverse a bridge from one model into another.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: April 16, 2019
    Assignee: DreamWorks Animation L.L.C.
    Inventors: Stephen Bailey, Martin Watt, Bo Morgan, James O'Brien
  • Publication number: 20170206696
    Abstract: Systems and methods for automatically animating a character based on an existing corpus of animation are described. The character may be from a previously produced feature animated film, and the data used for training may be the data used to animate the character in the film. A low-dimensional embedding for subsets of the existing animation corresponding to different semantic labels may be learned by mapping high-dimensional rig control parameters to a latent space. A particle model may be used to move within the latent space, thereby generating novel animations corresponding to the space's semantic label, such as a pose. Bridges may link a first pose of a first model within the latent space to a similar second pose of a second model of the space. Animations corresponding to transitions between semantic labels may be generated by creating animation paths that traverse a bridge from one model into another.
    Type: Application
    Filed: January 18, 2017
    Publication date: July 20, 2017
    Applicant: DreamWorks Animation LLC
    Inventors: Stephen BAILEY, Martin WATT, Bo MORGAN, James O'BRIEN