Patents Assigned to Lucasfilm Entertainment Company Ltd.
  • Patent number: 11087738
    Abstract: Implementations of the disclosure describe systems and methods that leverage machine learning to automate the process of creating music and effects mixes from original sound mixes including domestic dialogue. In some implementations, a method includes: receiving a sound mix including human dialogue; extracting metadata from the sound mix, where the extracted metadata categorizes the sound mix; extracting content feature data from the sound mix, the extracted content feature data including an identification of the human dialogue and instances or times the human dialogue occurs within the sound mix; automatically calculating, with a trained model, content feature data of a music and effects (M&E) sound mix using at least the extracted metadata and the extracted content feature data of the sound mix; and deriving the M&E sound mix using at least the calculated content feature data.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 10, 2021
    Assignee: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Scott Levine, Stephen Morris
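The derivation step in the abstract above can be sketched roughly as follows — a minimal, illustrative toy in which the detected dialogue times are used to suppress dialogue regions of the full mix. All names and the frame-based representation are assumptions for illustration, not the patent's actual implementation.

```python
# Toy sketch: given a full sound mix and the time intervals where dialogue
# was identified, zero out those regions to approximate a music-and-effects
# (dialogue-free) mix. A real system would operate on audio stems, not a
# flat list of frame amplitudes.

def derive_me_mix(full_mix, dialogue_times, frame_rate=1):
    """Silence frames that fall inside any detected dialogue interval."""
    me_mix = list(full_mix)
    for start, end in dialogue_times:
        lo = int(start * frame_rate)
        hi = min(int(end * frame_rate), len(me_mix))
        for i in range(lo, hi):
            me_mix[i] = 0.0
    return me_mix

mix = [0.5, 0.7, 0.9, 0.4]
dialogue = [(1, 3)]  # dialogue detected between frames 1 and 3
me = derive_me_mix(mix, dialogue)
```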
  • Patent number: 11069135
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: July 20, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
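The "parameterized deformable model" in the abstract above is commonly realized as a linear blendshape model: a neutral mesh plus weighted per-vertex deltas, so that varying the parameter values produces different facial expressions. A minimal sketch, with toy one-coordinate "vertices" and illustrative names:

```python
# Linear blendshape evaluation: mesh = neutral + sum_i weights[i] * deltas[i].
# Varying the weight values yields different facial expressions.

def deform(neutral, deltas, weights):
    mesh = list(neutral)
    for w, delta in zip(weights, deltas):
        mesh = [v + w * d for v, d in zip(mesh, delta)]
    return mesh

neutral = [0.0, 0.0, 0.0]   # toy 3-vertex "mesh", one coordinate per vertex
smile = [0.0, 1.0, 0.0]     # delta raising the middle vertex
brow = [0.5, 0.0, 0.0]      # delta moving the first vertex
expression = deform(neutral, [smile, brow], [1.0, 0.5])
```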
  • Patent number: 11049332
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: June 29, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
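The iterative solve described above — render the current model state, compare against the plate, update parameters to reduce the residual — can be sketched in one dimension. `render` here is a scalar stand-in for a differentiable renderer, and all names are illustrative assumptions, not the patent's implementation:

```python
# Toy gradient-descent version of a render-and-compare solver: the shading
# model says observed intensity = albedo * geometry term, and we recover the
# geometry term from an observed intensity over a series of iterations.

def render(geometry_param, albedo):
    return albedo * geometry_param

def solve(observed, albedo, lr=0.1, iters=100):
    geometry_param = 0.0
    for _ in range(iters):
        residual = render(geometry_param, albedo) - observed
        geometry_param -= lr * residual * albedo  # gradient of 0.5 * residual**2
    return geometry_param
```

With `albedo = 1.0` and `observed = 0.8`, the parameter converges to 0.8, mirroring how the real solver infers geometry consistent with the plate.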
  • Patent number: 11039083
    Abstract: Embodiments can enable motion capture cameras to be optimally placed in a set. To achieve this, a virtual set can be generated based on information regarding the set. Movement of a virtual actor or a virtual object may be controlled in the virtual set to simulate movement of the corresponding real actor and real object in the set. Based on such movement, camera aspects and obstructions in the set can be determined. Based on this determination, indication information indicating whether regions in the set may be viewable by one or more cameras placed in the physical set may be obtained. Based on the indication information, an optimal placement of the motion capture cameras in the set can be determined. In some embodiments, an interface may be provided to show whether the markers attached to the actor can be captured by the motion capture cameras placed in a specific configuration.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: June 15, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: John Levin, Mincho Marinov, Brian Cantwell
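The "indication information" step above — flagging which regions of the set enough cameras can see — can be sketched as a simple coverage count. The per-camera visibility sets would come from the virtual-set occlusion tests; everything here is an illustrative assumption:

```python
# For each region of the (discretized) set, count how many cameras report it
# visible, and indicate whether that meets a minimum-coverage requirement.

def coverage(camera_visibility, num_regions, min_cameras=2):
    counts = [0] * num_regions
    for visible_regions in camera_visibility.values():
        for r in visible_regions:
            counts[r] += 1
    return [c >= min_cameras for c in counts]

vis = {"camA": [0, 1], "camB": [1, 2], "camC": [1]}
indication = coverage(vis, 3)  # only region 1 is seen by enough cameras
```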
  • Patent number: 11030810
    Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: June 8, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Roger Cordes, David Brickhill
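The gesture-to-action triggering described above amounts to a dispatch table inside the game engine's update loop. A minimal sketch with hypothetical gesture and action names:

```python
# Map recognized performer gestures to actions in the 3-D virtual
# environment; unrecognized gestures trigger nothing.

TRIGGERS = {
    "raise_hand": "spawn_effect",
    "clap": "advance_animation_graph",
}

def handle_gesture(gesture):
    """Return the engine action for a recognized gesture, or None."""
    return TRIGGERS.get(gesture)
```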
  • Publication number: 20210150810
    Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 20, 2021
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
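The retrieve-and-merge pipeline above can be sketched with a nearest-neighbor lookup into the facial dataset and a naive per-vertex average as the merge. The data layout and the averaging merge are illustrative assumptions only:

```python
# For each observed 3D bundle, retrieve the dataset entry whose stored
# bundle is closest, then merge the retrieved local patches into one
# reconstruction (here, a simple per-vertex average).

def retrieve(bundle, dataset):
    return min(dataset, key=lambda entry: sum((a - b) ** 2
                                              for a, b in zip(entry["bundle"], bundle)))

def reconstruct(bundles, dataset):
    patches = [retrieve(b, dataset)["patch"] for b in bundles]
    n = len(patches)
    return [sum(p[i] for p in patches) / n for i in range(len(patches[0]))]

dataset = [
    {"bundle": (0.0, 0.0), "patch": [1.0, 1.0]},
    {"bundle": (1.0, 1.0), "patch": [3.0, 3.0]},
]
recon = reconstruct([(0.1, 0.0), (0.9, 1.0)], dataset)
```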
  • Patent number: 10964083
    Abstract: A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor configured to execute the instructions to perform a method that includes receiving multiple representations of one or more expressions of an object. Each of the representations includes position information attained from one or more images of the object. The method also includes producing an animation model from one or more groups of controls that respectively define each of the one or more expressions of the object as provided by the multiple representations. Each control of each group of controls has an adjustable value that defines the geometry of at least one shape of a portion of the respective expression of the object. Producing the animation model includes producing one or more corrective shapes if the animation model is incapable of accurately presenting the one or more expressions of the object as provided by the multiple representations.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: March 30, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Kiran S. Bhat, Michael Koperwas, Rachel M. Rose, Jung-Seung Hong, Frederic P. Pighin, Christopher David Twigg, Cary Phillips, Steve Sullivan
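The corrective-shape idea above can be sketched as: evaluate the control-driven model, and if it cannot reach a captured expression, append the leftover residual as an extra shape so the expression becomes exactly representable. Names and the scalar-vertex representation are illustrative:

```python
# If the weighted shapes cannot reproduce the target expression, add the
# residual as a corrective shape applied at weight 1.0.

def add_corrective_if_needed(target, neutral, shapes, weights, tol=1e-6):
    approx = list(neutral)
    for w, shape in zip(weights, shapes):
        approx = [a + w * d for a, d in zip(approx, shape)]
    residual = [t - a for t, a in zip(target, approx)]
    if max(abs(r) for r in residual) > tol:
        shapes = shapes + [residual]
        weights = weights + [1.0]
    return shapes, weights
```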
  • Patent number: 10928995
    Abstract: Systems, devices, and methods are disclosed for UV packing. The system includes a non-transitory computer-readable medium operatively coupled to processors. The non-transitory computer-readable medium stores instructions that, when executed, cause the processors to perform a number of operations. One operation is to present a packing map using a graphical user interface including a selection tool. Another operation is to present a first set of one or more target objects using the graphical user interface. Individual ones of the first set include one or more features. One operation is to receive a first user input. Another operation is to, based on the first user input and the one or more features corresponding to the individual ones of the first set, pack the first set into a packing map.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: February 23, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Colette Mullenhoff, Benjamin Neall
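The packing operation above — placing a set of target objects into a packing map based on their features — can be sketched with a classic greedy shelf packer over bounding rectangles. This simplification (axis-aligned rectangles, tallest-first ordering) is an assumption, not the patent's algorithm:

```python
# Greedy shelf packing: sort rects tallest-first, fill rows left to right,
# and start a new shelf when the current row is full. Returns each rect
# paired with its placed lower-left corner.

def pack(rects, map_width):
    placements, x, y, shelf_h = [], 0, 0, 0
    for w, h in sorted(rects, key=lambda r: -r[1]):
        if x + w > map_width:
            x, y, shelf_h = 0, y + shelf_h, 0
        placements.append(((w, h), (x, y)))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

layout = pack([(4, 3), (3, 2), (5, 1)], map_width=8)
```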
  • Publication number: 20200394999
    Abstract: Implementations of the disclosure describe systems and methods that leverage machine learning to automate the process of creating music and effects mixes from original sound mixes including domestic dialogue. In some implementations, a method includes: receiving a sound mix including human dialogue; extracting metadata from the sound mix, where the extracted metadata categorizes the sound mix; extracting content feature data from the sound mix, the extracted content feature data including an identification of the human dialogue and instances or times the human dialogue occurs within the sound mix; automatically calculating, with a trained model, content feature data of a music and effects (M&E) sound mix using at least the extracted metadata and the extracted content feature data of the sound mix; and deriving the M&E sound mix using at least the calculated content feature data.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 17, 2020
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Scott Levine, Stephen Morris
  • Patent number: 10846920
    Abstract: Implementations of the disclosure are directed to generating shadows in the physical world that correspond to virtual objects displayed on MR displays. In some implementations, a method includes: synchronously presenting a version of a scene on each of a MR display system and a projector display system, where during presentation: the MR display system displays a virtual object overlaid over a view of a physical environment; and a projector of the projector display system creates a shadow on a surface in the physical environment, the created shadow corresponding to the virtual object displayed by the MR display. In some implementations, the method includes: loading in a memory of the MR display system, a first version of the scene including the virtual object; and loading in a memory of the projector display system a second version of the scene including a virtual surface onto which the virtual object casts a shadow.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: November 24, 2020
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Michael Koperwas, Lutz Latta
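The two scene versions described in the abstract above — one for the MR headset containing the virtual object, one for the projector containing only a shadow-receiving virtual surface — can be sketched as a pair of scene descriptions. Field names are purely illustrative:

```python
# Build the two synchronized versions of a scene: the MR display renders
# the virtual object itself; the projector renders only the shadow it
# casts onto a virtual stand-in for a real surface.

def build_scene_versions(virtual_object, surface):
    mr_scene = {"render": [virtual_object], "shadows_only": False}
    projector_scene = {"render": [surface],
                       "shadow_casters": [virtual_object],
                       "shadows_only": True}
    return mr_scene, projector_scene
```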
  • Patent number: 10825225
    Abstract: Some implementations of the disclosure are directed to a pipeline that enables real time engines such as gaming engines to leverage high quality simulations generated offline via film grade simulation systems. In one implementation, a method includes: obtaining simulation data and skeletal mesh data of a character, the simulation data and skeletal mesh data including the character in the same rest pose; importing the skeletal mesh data into a real-time rendering engine; and using at least the simulation data and the imported skeletal mesh data to derive from the simulation data a transformed simulation vertex cache that is usable by the real-time rendering engine during runtime to be skinned in place of the rest pose.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: November 3, 2020
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Ronald Radeztsky, Michael Koperwas
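The cache derivation above can be sketched with scalars standing in for skinning transforms: move each simulated vertex into rest-pose space by the inverse of its skinning factor, so the engine's runtime skinning step reproduces the offline simulation. Real skinning uses matrices; this scalar version is an illustrative assumption:

```python
# Derive a transformed vertex cache by applying the inverse skinning factor
# per vertex; skinning the cache at runtime recovers the simulated frames.

def build_vertex_cache(simulated_frames, skin_factors):
    return [[v / s for v, s in zip(frame, skin_factors)]
            for frame in simulated_frames]

def skin(cache_frame, skin_factors):
    return [v * s for v, s in zip(cache_frame, skin_factors)]
```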
  • Patent number: 10812693
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: October 20, 2020
    Assignee: LucasFilm Entertainment Company Ltd.
    Inventors: Leandro Estebecorena, John Knoll, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
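The marker-recovery side of the dual-spectrum capture above can be sketched simply: markers invisible in the visible-spectrum frame appear as bright spots in the IR frame, so thresholding recovers their positions for combination with the visible-light image. The flat-list frame and threshold value are illustrative assumptions:

```python
# Recover marker pixel positions from an IR frame by intensity threshold;
# these positions can then be combined with the visible-spectrum frame.

def extract_marker_pixels(ir_frame, threshold=0.8):
    return [i for i, v in enumerate(ir_frame) if v > threshold]
```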
  • Patent number: 10796489
    Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. The use of the 3-D environment can be rendered and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: October 6, 2020
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Roger Cordes, David Brickhill
  • Publication number: 20200286301
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
    Type: Application
    Filed: March 3, 2020
    Publication date: September 10, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
  • Publication number: 20200286284
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.
    Type: Application
    Filed: November 12, 2019
    Publication date: September 10, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
  • Publication number: 20200288050
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Application
    Filed: May 20, 2020
    Publication date: September 10, 2020
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Patent number: 10762599
    Abstract: A method is described that includes receiving, from a first device, input used to select a first object in a computer-generated environment. The first device has at least two degrees of freedom with which to control the selection of the first object. The method also includes removing, in response to the selection of the first object, at least two degrees of freedom previously available to a second device used to manipulate a second object in the computer-generated environment. The removed degrees of freedom correspond to the at least two degrees of freedom of the first device and specify an orientation of the second object relative to the selected first object. Additionally, the method includes receiving, from the second device, input including movements within the reduced degrees of freedom used to manipulate a position of the second object while maintaining the specified orientation relative to the selected first object.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: September 1, 2020
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventor: Steve Sullivan
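The reduced-degrees-of-freedom manipulation above can be sketched as a filter on incoming motion: once orientation is locked to the selected first object, only the positional axes still available accept input from the second device. Axis indices and names are illustrative:

```python
# Apply a movement delta to a position, but only along the axes that remain
# as available degrees of freedom; removed axes are ignored.

def apply_move(position, move, allowed_axes):
    return [p + d if axis in allowed_axes else p
            for axis, (p, d) in enumerate(zip(position, move))]

new_pos = apply_move([0.0, 0.0, 0.0], [1.0, 2.0, 3.0], allowed_axes={0, 2})
```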
  • Publication number: 20200265638
    Abstract: Implementations of the disclosure are directed to generating shadows in the physical world that correspond to virtual objects displayed on MR displays. In some implementations, a method includes: synchronously presenting a version of a scene on each of a MR display system and a projector display system, where during presentation: the MR display system displays a virtual object overlaid over a view of a physical environment; and a projector of the projector display system creates a shadow on a surface in the physical environment, the created shadow corresponding to the virtual object displayed by the MR display. In some implementations, the method includes: loading in a memory of the MR display system, a first version of the scene including the virtual object; and loading in a memory of the projector display system a second version of the scene including a virtual surface onto which the virtual object casts a shadow.
    Type: Application
    Filed: February 20, 2019
    Publication date: August 20, 2020
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Michael Koperwas, Lutz Latta
  • Publication number: 20200249765
    Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user.
    Type: Application
    Filed: April 17, 2020
    Publication date: August 6, 2020
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Darby Johnston, Ian Wakelin
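The touch-gesture control described above can be sketched as a lookup from (gesture, finger count) pairs to VR view actions. The gesture vocabulary and action names here are hypothetical:

```python
# Resolve a finger gesture on the touch-sensitive surface to an action on
# the VR view; unmapped gestures are ignored.

GESTURE_ACTIONS = {
    ("tap", 2): "open_menu",
    ("swipe", 1): "orbit_camera",
    ("pinch", 2): "zoom_view",
}

def resolve(gesture, finger_count):
    return GESTURE_ACTIONS.get((gesture, finger_count), "ignore")
```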
  • Patent number: D910738
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: February 16, 2021
    Assignee: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: John M. Levin, Leandro F. Estebecorena, Paige Warner