Patents Assigned to Lucasfilm
-
Publication number: 20220005279. Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. The images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not. The techniques further include generating content based on the plurality of captured images. Type: Application. Filed: September 22, 2021. Publication date: January 6, 2022. Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC. Inventors: Roger CORDES, Nicholas RASMUSSEN, Kevin WOOLEY, Rachel ROSE
-
Publication number: 20210407174. Abstract: A method of rendering an image includes receiving information of a virtual camera, including a camera position and a camera orientation defining a virtual screen; receiving information of a target screen, including a target screen position and a target screen orientation defining a plurality of pixels, each respective pixel corresponding to a respective UV coordinate on the target screen; for each respective pixel of the target screen: determining a respective XY coordinate of a corresponding point on the virtual screen based on the camera position, the camera orientation, the target screen position, the target screen orientation, and the respective UV coordinate; tracing one or more rays from the virtual camera through the corresponding point on the virtual screen toward a virtual scene; and estimating a respective color value for the respective pixel based on incoming light from virtual objects in the virtual scene that intersect the one or more rays. Type: Application. Filed: June 30, 2020. Publication date: December 30, 2021. Applicant: Lucasfilm Entertainment Company Ltd. Inventors: Nicholas Walker, David Weitzberg, André Mazzone
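The per-pixel mapping described in this abstract can be illustrated with a minimal sketch (not from the patent; all function names, the planar-screen parameterization, and the pinhole projection model are assumptions for illustration): a UV coordinate on the physical target screen is lifted to a world-space point, that point is projected onto the virtual camera's image plane to get an XY coordinate, and a ray is traced from the camera through it.

```python
import numpy as np

def uv_to_world(uv, screen_origin, screen_u_axis, screen_v_axis):
    """Map a UV coordinate on a planar target screen to a world-space point."""
    u, v = uv
    return screen_origin + u * screen_u_axis + v * screen_v_axis

def world_to_virtual_xy(p_world, cam_pos, cam_right, cam_up, cam_forward, focal=1.0):
    """Project a world point onto the virtual camera's image plane (XY),
    using a simple pinhole model (assumed here, not specified by the patent)."""
    d = p_world - cam_pos
    z = np.dot(d, cam_forward)          # depth along the view axis
    x = focal * np.dot(d, cam_right) / z
    y = focal * np.dot(d, cam_up) / z
    return np.array([x, y])

def ray_through_pixel(uv, screen_origin, screen_u_axis, screen_v_axis, cam_pos):
    """Ray from the virtual camera through the target-screen pixel; the
    renderer would shade the pixel from scene intersections along this ray."""
    p = uv_to_world(uv, screen_origin, screen_u_axis, screen_v_axis)
    direction = p - cam_pos
    return cam_pos, direction / np.linalg.norm(direction)
```

Because the color is computed per physical pixel rather than per camera-image pixel, the rendered result stays correct for the camera's viewpoint even though it is displayed on a screen that the camera photographs obliquely.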
-
Publication number: 20210407199. Abstract: A method of edge loop selection includes accessing a polygon mesh; receiving a selection of a first edge connected to a first non-four-way intersection vertex; receiving, after receiving the selection of the first edge, a selection of a second edge connected to the first non-four-way intersection vertex; in response to receiving a command invoking an edge loop selection process: evaluating a topological relationship between the first edge and the second edge; determining a rule for processing a non-four-way intersection vertex based on the topological relationship between the first edge and the second edge; and completing an edge loop by, from the second edge, processing each respective four-way intersection vertex by choosing a middle edge as a next edge at the respective four-way intersection vertex, and processing each respective non-four-way intersection vertex based on the rule. Type: Application. Filed: July 16, 2020. Publication date: December 30, 2021. Applicant: Lucasfilm Entertainment Company Ltd. Inventor: Colette Mullenhoff
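The core walk at four-way vertices can be sketched as follows (a minimal illustration, not the patent's implementation; the `incident` adjacency structure, edge tuples, and function names are assumptions). At a four-way vertex the "middle" edge is the one directly opposite the incoming edge in the cyclic ordering of edges around that vertex; the loop ends, in this simplified version, at any non-four-way vertex, where the patent would instead apply its topologically derived rule.

```python
def next_edge(incident, vertex, incoming_edge):
    """At a four-way vertex, pick the middle edge: the edge two steps away
    from the incoming edge in the cyclic ordering around the vertex."""
    ring = incident[vertex]            # edges in cyclic order around vertex
    if len(ring) != 4:
        return None                    # non-four-way vertex: stop (rule omitted)
    i = ring.index(incoming_edge)
    return ring[(i + 2) % 4]

def walk_edge_loop(incident, start_edge):
    """Follow an edge loop from start_edge (a (v0, v1) tuple, walked from v1)
    until it closes on itself or reaches a non-four-way vertex."""
    loop = [start_edge]
    edge = start_edge
    vertex = edge[1]
    while True:
        nxt = next_edge(incident, vertex, edge)
        if nxt is None or nxt == start_edge:
            return loop
        loop.append(nxt)
        # advance to the far endpoint of the chosen edge
        vertex = nxt[1] if nxt[0] == vertex else nxt[0]
        edge = nxt
```

The patent's contribution is what happens where this sketch gives up: using the user's two selected edges at a non-four-way vertex to infer a consistent rule for continuing the loop through other irregular vertices.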
-
Patent number: 11200752. Abstract: In at least one embodiment, an immersive content generation system may receive a first user input that defines a three-dimensional (3D) volume within a performance area. In at least one embodiment, the system may capture a plurality of images of an object in the performance area using a camera, wherein the object is at least partially surrounded by one or more displays presenting images of a virtual environment. In at least one embodiment, the system may receive a second user input to adjust a color value of a virtual image of the object as displayed in the images in the virtual environment. In at least one embodiment, the system may perform a color correction pass for the displayed images of the virtual environment. In at least one embodiment, the system may generate content based on the plurality of captured images that are corrected via the color correction pass. Type: Grant. Filed: August 21, 2020. Date of Patent: December 14, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Roger Cordes, Lutz Latta
-
Patent number: 11170571. Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance. Type: Grant. Filed: November 15, 2019. Date of Patent: November 9, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC. Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
-
Publication number: 20210342971. Abstract: A method of content production includes generating a survey of a performance area that includes a point cloud representing a first physical object; in a survey graph hierarchy, constraining the point cloud and a taking camera coordinate system as child nodes of an origin of a survey coordinate system; obtaining virtual content including a first virtual object that corresponds to the first physical object; applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with a portion of the virtual content that represents the first virtual object; displaying the first virtual object on one or more displays from a perspective of the taking camera; capturing, using the taking camera, one or more images of the performance area; and generating content based on the one or more images. Type: Application. Filed: April 13, 2021. Publication date: November 4, 2021. Applicant: Lucasfilm Entertainment Company Ltd. Inventors: Douglas G. Watkins, Paige M. Warner, Dacklin R. Young
-
Patent number: 11145125. Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience. Type: Grant. Filed: September 13, 2018. Date of Patent: October 12, 2021. Assignee: Lucasfilm Entertainment Company Ltd. Inventors: Roger Cordes, David Brickhill
-
Patent number: 11132838. Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. The images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not. The techniques further include generating content based on the plurality of captured images. Type: Grant. Filed: November 6, 2019. Date of Patent: September 28, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC. Inventors: Roger Cordes, Richard Bluff, Lutz Latta
-
Patent number: 11132837. Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. The images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not. The techniques further include generating content based on the plurality of captured images. Type: Grant. Filed: November 6, 2019. Date of Patent: September 28, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC. Inventors: Roger Cordes, Nicholas Rasmussen, Kevin Wooley, Rachel Rose
-
Patent number: 11128984. Abstract: A method may include causing first content to be displayed on a display device; causing second content to be rendered irrespective of a location of a mobile device relative to the display device; and causing the second content to be displayed on the mobile device such that the second content is layered over the first content. When the second content has moved a predetermined distance from the screen, the method may also include causing the second content to be rendered based on the location of the mobile device relative to the display device, and causing the second content to be displayed on the mobile device such that the second content is layered over the first content. Type: Grant. Filed: October 25, 2019. Date of Patent: September 21, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: John Gaeta, Michael Koperwas, Nicholas Rasmussen
-
Patent number: 11113885. Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience. Type: Grant. Filed: September 13, 2018. Date of Patent: September 7, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Roger Cordes, David Brickhill
-
Patent number: 11107195. Abstract: An immersive content production system may capture a plurality of images of a physical object in a performance area using a taking camera. The system may determine an orientation and a velocity of the taking camera with respect to the physical object in the performance area. A user may select a first amount of motion blur exhibited by the images of the physical object based on a desired motion effect. The system may determine a correction to apply to a virtual object based at least in part on the orientation and the velocity of the taking camera and the desired motion blur effect. The system may also detect the distance from the taking camera to a physical object and from the taking camera to the virtual display. The system may use these distances to generate a corrected circle of confusion for the virtual images on the display. Type: Grant. Filed: August 21, 2020. Date of Patent: August 31, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Roger Cordes, Lutz Latta
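The circle of confusion mentioned here has a standard thin-lens form that the distances in the abstract would feed into. The sketch below is illustrative only (the function name and millimeter units are assumptions; the patent's actual correction is not specified): it computes the blur-circle diameter for an object at one distance when the lens is focused at another, which is why content shown on an LED wall at one physical distance must be pre-blurred to read as if it were at its virtual distance.

```python
def circle_of_confusion(focal_mm, f_number, focus_dist_mm, obj_dist_mm):
    """Blur-circle diameter (mm) under the thin-lens model:
    c = A * |S2 - S1| / S2 * f / (S1 - f),
    where A is the aperture diameter, S1 the focus distance,
    S2 the object distance, and f the focal length."""
    aperture = focal_mm / f_number            # A = f / N
    return (aperture * abs(obj_dist_mm - focus_dist_mm) / obj_dist_mm
            * focal_mm / (focus_dist_mm - focal_mm))
```

Comparing the blur the camera lens naturally produces at the wall's physical distance with the blur expected at the virtual object's distance gives the correction that must be rendered into the displayed image.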
-
Patent number: 11099654. Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user. Type: Grant. Filed: April 17, 2020. Date of Patent: August 24, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Darby Johnston, Ian Wakelin
-
Patent number: 11087738. Abstract: Implementations of the disclosure describe systems and methods that leverage machine learning to automate the process of creating music and effects mixes from original sound mixes including domestic dialogue. In some implementations, a method includes: receiving a sound mix including human dialogue; extracting metadata from the sound mix, where the extracted metadata categorizes the sound mix; extracting content feature data from the sound mix, the extracted content feature data including an identification of the human dialogue and instances or times the human dialogue occurs within the sound mix; automatically calculating, with a trained model, content feature data of a music and effects (M&E) sound mix using at least the extracted metadata and the extracted content feature data of the sound mix; and deriving the M&E sound mix using at least the calculated content feature data. Type: Grant. Filed: June 11, 2019. Date of Patent: August 10, 2021. Assignee: Lucasfilm Entertainment Company Ltd. LLC. Inventors: Scott Levine, Stephen Morris
-
Patent number: 11069135. Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured. Type: Grant. Filed: November 12, 2019. Date of Patent: July 20, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
-
Patent number: 11049332. Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and … Type: Grant. Filed: March 3, 2020. Date of Patent: June 29, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
-
Patent number: 11039083. Abstract: Embodiments can enable motion capture cameras to be optimally placed in a set. To achieve this, a virtual set can be generated based on information regarding the set. Movement of a virtual actor or a virtual object may be controlled in the virtual set to simulate movement of the corresponding real actor and real object in the set. Based on such movement, camera aspects and obstructions in the set can be determined. From this determination, indication information can be obtained that indicates whether regions in the set are viewable by one or more cameras placed in the physical set. Based on the indication information, an optimal placement of the motion capture cameras in the set can be determined. In some embodiments, an interface may be provided to show whether the markers attached to the actor can be captured by the motion capture cameras placed in a specific configuration. Type: Grant. Filed: January 24, 2017. Date of Patent: June 15, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: John Levin, Mincho Marinov, Brian Cantwell
-
Patent number: 11030810. Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience. Type: Grant. Filed: September 13, 2018. Date of Patent: June 8, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Roger Cordes, David Brickhill
-
Publication number: 20210150810. Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance. Type: Application. Filed: November 15, 2019. Publication date: May 20, 2021. Applicant: Lucasfilm Entertainment Company Ltd. LLC. Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
-
Patent number: 10964083. Abstract: A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor configured to execute the instructions to perform a method that includes receiving multiple representations of one or more expressions of an object. Each of the representations includes position information attained from one or more images of the object. The method also includes producing an animation model from one or more groups of controls that respectively define each of the one or more expressions of the object as provided by the multiple representations. Each control of each group of controls has an adjustable value that defines the geometry of at least one shape of a portion of the respective expression of the object. Producing the animation model includes producing one or more corrective shapes if the animation model is incapable of accurately presenting the one or more expressions of the object as provided by the multiple representations. Type: Grant. Filed: April 10, 2019. Date of Patent: March 30, 2021. Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. Inventors: Kiran S. Bhat, Michael Koperwas, Rachel M. Rose, Jung-Seung Hong, Frederic P. Pighin, Christopher David Twigg, Cary Phillips, Steve Sullivan
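An animation model with controls and corrective shapes, as described in this last abstract, is commonly realized as a linear blendshape rig. The sketch below is a generic illustration under that assumption (the function name, data layout, and the pairwise product-of-weights trigger for correctives are illustrative, not taken from the patent): each control adds a weighted per-vertex offset, and a corrective shape fires only when two controls are active together, compensating where their linear sum alone misrepresents the captured expression.

```python
import numpy as np

def evaluate_rig(base, deltas, correctives, weights):
    """Evaluate a linear blendshape rig with pairwise corrective shapes.

    base:        (N, 3) rest-pose vertex positions
    deltas:      {control_name: (N, 3) per-vertex offset}
    correctives: {(name_a, name_b): (N, 3) offset applied when both fire}
    weights:     {control_name: float control value}
    """
    mesh = base.copy()
    for name, w in weights.items():
        mesh += w * deltas[name]
    # A corrective shape is driven by the product of its two control values,
    # so it vanishes whenever either control is at zero.
    for (a, b), delta in correctives.items():
        mesh += weights.get(a, 0.0) * weights.get(b, 0.0) * delta
    return mesh
```

Fitting such a rig to the captured representations amounts to solving for the control weights (and, when the residual stays large, adding a corrective shape) so the evaluated mesh matches the measured positions.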