Lucasfilm Patents
Lucasfilm Ltd. produced the Star Wars and Indiana Jones motion pictures. The company was acquired by the Walt Disney Company in 2012.
Lucasfilm Patents by Type
- Lucasfilm Patents Granted: Lucasfilm patents that have been granted by the United States Patent and Trademark Office (USPTO).
- Lucasfilm Patent Applications: Lucasfilm patent applications that are pending before the USPTO.
Patent number: 12236533
Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. The images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not. The techniques further include generating content based on the plurality of captured images.
Type: Grant
Filed: June 13, 2023
Date of Patent: February 25, 2025
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
Inventors: Roger Cordes, Nicholas Rasmussen, Kevin Wooley, Rachel Rose
Publication number: 20240420430
Abstract: A computer-implemented method comprising: capturing, with a mobile device comprising a camera and a display, a first image of a physical environment and a first set of telemetry data corresponding with the first image, wherein the first set of telemetry data comprises GPS location data along with orientation data; transmitting, to a server system via a wireless communication channel, the first set of telemetry data; determining, with the server system, a precise location and orientation of the mobile device in the physical environment based on the first set of telemetry data received from the mobile device and on information accessed from multiple sources of data including one or more of a street map database, visual positioning system (VPS) data, and image anchors; rendering an image of virtual content based at least in part on the first set of telemetry data for the mobile device, wherein the rendered image of virtual content is over-rendered such that it represents a larger area in the virtual world than the …
Type: Application
Filed: June 14, 2024
Publication date: December 19, 2024
Applicant: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Michael Koperwas, Earle M. Alexander, IV
Patent number: 12131419
Abstract: A method of rendering an image includes receiving information of a virtual camera, including a camera position and a camera orientation defining a virtual screen; receiving information of a target screen, including a target screen position and a target screen orientation defining a plurality of pixels, each respective pixel corresponding to a respective UV coordinate on the target screen; for each respective pixel of the target screen: determining a respective XY coordinate of a corresponding point on the virtual screen based on the camera position, the camera orientation, the target screen position, the target screen orientation, and the respective UV coordinate; tracing one or more rays from the virtual camera through the corresponding point on the virtual screen toward a virtual scene; and estimating a respective color value for the respective pixel based on incoming light from virtual objects in the virtual scene that intersect the one or more rays.
Type: Grant
Filed: June 30, 2020
Date of Patent: October 29, 2024
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD
Inventors: Nicholas Walker, David Weitzberg, André Mazzone
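The per-pixel procedure in this abstract can be sketched as a short routine. The code below is purely illustrative, not from the patent: the names are invented, and the virtual screen is modeled as an origin point plus two axis vectors, which is one possible parameterization.

```python
import numpy as np

def pixel_rays(cam_pos, screen_origin, screen_u_axis, screen_v_axis, uvs):
    """For each UV coordinate on the target screen, find the corresponding
    point on the virtual screen and build a ray from the camera through it.
    (Sketch: the screen plane is an origin plus two axis vectors.)"""
    rays = []
    for (u, v) in uvs:
        # Map the pixel's UV coordinate to an XY point on the screen plane.
        point = screen_origin + u * screen_u_axis + v * screen_v_axis
        # The ray starts at the camera and passes through that point.
        direction = point - cam_pos
        direction = direction / np.linalg.norm(direction)
        rays.append((cam_pos, direction))
    return rays
```

Each returned ray would then be traced into the virtual scene, and the incoming light along it averaged into the pixel's color.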
Patent number: 12022357
Abstract: A method includes causing background content to be displayed on a display device with a first virtual object and a second virtual object; causing augmented reality (AR) content to be rendered based on a location of an AR device relative to the display device; determining that the AR content is in front of the first virtual object in the scene when viewed through the AR device and rendering the background content with a cutout in the first virtual object when the first virtual object overlaps with the AR content; and determining that the AR content is behind the second virtual object in the scene when viewed through the AR device and rendering the AR content with a cutout in the AR content when the AR content overlaps with the second virtual object.
Type: Grant
Filed: September 10, 2021
Date of Patent: June 25, 2024
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: John Gaeta, Michael Koperwas, Nicholas Rasmussen
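The in-front/behind cutout logic amounts to a per-pixel depth comparison between the AR content and the virtual object. A hypothetical minimal sketch (all names and the depth convention are assumptions, not details from the patent):

```python
def composite_pixel(ar_depth, virtual_depth, ar_pixel, bg_pixel):
    """Per-pixel occlusion test: whichever layer is closer to the viewer
    is drawn; the other effectively receives a 'cutout' at this pixel."""
    if ar_depth < virtual_depth:
        return ar_pixel   # AR content is in front: virtual object is cut out
    return bg_pixel       # virtual object is in front: AR content is cut out
```

Running this test over every overlapping pixel yields the cutouts the abstract describes in both directions.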
Patent number: 11978154
Abstract: In at least one embodiment, an immersive content generation system may receive a first input from a user indicating a lighting value. The computing device may receive a second input indicating a region of an immersive virtual environment to which the lighting value is to be applied. The computing device may apply the lighting value to the region of the immersive virtual environment. The computing device may output one or more images of the immersive virtual environment, the one or more images based, in part, on the input lighting value. Numerous other aspects are described.
Type: Grant
Filed: April 8, 2022
Date of Patent: May 7, 2024
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Michael Jutan, David Hirschfield, Jeff Webster, Scott Richards
Patent number: 11948260
Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
Type: Grant
Filed: November 18, 2022
Date of Patent: April 2, 2024
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Roger Cordes, David Brickhill
Publication number: 20240096035
Abstract: A method of content production may include receiving tracking information for a camera with a frustum configured to capture images of a subject in an immersive environment. A first image of a virtual environment corresponding to the frustum may be rendered using a first rendering process based on the tracking information to be perspective-correct when displayed on the displays and viewed through the camera. A second image of the virtual environment may also be rendered using a second rendering process for a specific display. The first image and the second image may be rendered in parallel. The second image and a portion of the first image may be composited together to generate a composite image, where the portion of the first image may correspond to a portion of the display captured by the frustum.
Type: Application
Filed: September 21, 2023
Publication date: March 21, 2024
Applicant: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Nicholas Rasmussen, Lutz Latta
Publication number: 20240038256
Abstract: Some implementations of the disclosure relate to a non-transitory computer-readable medium having executable instructions stored thereon that, when executed by a processor, cause a system to perform operations comprising: obtaining a first energy-based target for audio; obtaining a first version of a sound mix including one or more audio components; computing, for each audio frame of multiple audio frames of each of the one or more audio components, a first audio feature measurement value; optimizing, based at least on the first energy-based target and the first audio feature measurement values, gain values of the audio frames; and after optimizing the gain values, applying the gain values to the first version of the sound mix to obtain a second version of the sound mix.
Type: Application
Filed: August 1, 2022
Publication date: February 1, 2024
Applicant: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Nicolas Tsingos, Scott Levine
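The pipeline in this abstract (per-frame feature measurement, gain computation against an energy target, then application of the gains) can be illustrated with a toy version. This is an assumption-laden sketch: the patent optimizes gains against the target, while this simplification just scales each frame's RMS energy independently toward it.

```python
import math

def frame_rms(frame):
    """One per-frame audio feature: root-mean-square energy of the samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def gains_toward_target(frames, target_rms):
    """Per-frame gains that move each frame's RMS toward the energy target."""
    gains = []
    for f in frames:
        rms = frame_rms(f)
        gains.append(target_rms / rms if rms > 0 else 1.0)
    return gains

def apply_gains(frames, gains):
    """Scale each frame's samples to produce the second version of the mix."""
    return [[g * s for s in f] for f, g in zip(frames, gains)]
```

In practice the gains would also be smoothed across frames to avoid audible pumping, a detail omitted here.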
Patent number: 11887251
Abstract: A computing device in communication with an immersive content generation system can generate and present images of a virtual environment on one or more light-emitting diode (LED) displays at least partially surrounding a performance area. The device may capture a plurality of images of a performer or a physical object in the performance area along with at least some portion of the images of the virtual environment by a taking camera. The device may identify a color mismatch between a portion of the performer or the physical object and a virtual image of the performer or the physical object in the images of the virtual environment. The device may generate a patch for the images of the virtual environment to correct the color mismatch. The device may insert the patch into the images of the virtual environment. Also, the device may generate content based on the plurality of captured images.
Type: Grant
Filed: April 8, 2022
Date of Patent: January 30, 2024
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Michael Jutan, David Hirschfield, Alan Bucior
Publication number: 20230336679
Abstract: A motion capture system comprising: a master clock configured to repeatedly generate and output, at a frame rate, a primary clock signal that conveys when a video frame starts; a first camera configured to capture light within a first set of wavelengths and operably coupled to receive the master clock signal and initiate an image capture sequence on a frame-by-frame basis in fixed phase relationship with the primary clock signal to generate a first set of images at the frame rate from light captured within the first set of wavelengths; a synchronization module operably coupled to receive the master clock signal from the master clock and configured to generate a synchronization signal offset in time from and in a fixed relationship with the primary clock signal; and a second camera configured to capture light within a second set of wavelengths, different than the first set of wavelengths, and operably coupled to receive the synchronization signal and initiate an image capture sequence on a frame-by-frame basis …
Type: Application
Filed: March 29, 2023
Publication date: October 19, 2023
Applicant: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Robert Derry, Gary P. Martinez, Brian Hook
Publication number: 20230326142
Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. The images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not. The techniques further include generating content based on the plurality of captured images.
Type: Application
Filed: June 13, 2023
Publication date: October 12, 2023
Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
Inventors: Roger Cordes, Nicholas Rasmussen, Kevin Wooley, Rachel Rose
Patent number: 11783493
Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.
Type: Grant
Filed: November 5, 2021
Date of Patent: October 10, 2023
Assignee: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
Publication number: 20230316587
Abstract: A computer-implemented method of changing a face within an output image or video frame that includes: receiving an input image that includes a face presenting a facial expression in a pose; processing the image with a neural network encoder to generate a latent space point that is an encoded representation of the image; decoding the latent space point to generate an initial output image in accordance with a desired facial identity but with the facial expression and pose of the face in the input image; identifying a feature of the facial expression in the initial output image to edit; applying an adjustment vector to a latent space point corresponding to the initial output image to generate an adjusted latent space point; and decoding the adjusted latent space point to generate an adjusted output image in accordance with the desired facial identity but with the facial expression and pose of the face in the input image altered in accordance with the adjustment vector.
Type: Application
Filed: March 29, 2022
Publication date: October 5, 2023
Applicants: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC, DISNEY ENTERPRISES, INC.
Inventors: Sirak Ghebremusse, Stéphane Grabli, Jacek Krzysztof Naruniec, Romann Matthew Weber, Christopher Richard Schroers
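The encode, shift-in-latent-space, decode loop this abstract describes has a compact shape. The sketch below is hypothetical: the encoder, decoder, and adjustment vector are stand-ins for the trained networks and learned edit directions the patent assumes.

```python
def edit_expression(encoder, decoder, image, adjustment, strength=1.0):
    """Encode an image to a latent point, shift that point along an
    adjustment vector (e.g., one learned to change a smile), and decode
    the shifted point back into an edited output image."""
    z = encoder(image)                                        # latent space point
    z_adj = [zi + strength * ai for zi, ai in zip(z, adjustment)]
    return decoder(z_adj)                                     # adjusted output image
```

The `strength` scalar lets an artist dial the edit up or down without retraining anything, which is the practical appeal of latent-space adjustment vectors.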
Patent number: 11762481
Abstract: In some implementations, an apparatus may include a housing enclosing circuitry that may include a processor and a memory, the housing forming a handgrip. In addition, the apparatus may include a plurality of light sensors arranged in a particular configuration, each of the plurality of light sensors coupled to an exterior of the housing via a sensor arm. Also, the apparatus may include one or more controls mounted on the exterior of the housing and electrically coupled to the circuitry. The apparatus can include one or more antennas mounted on an exterior of the housing, and a transmitter connected to the circuitry and electrically connected to the one or more antennas to send data from the apparatus via a wireless protocol. The apparatus can include a mount for attaching an electronic device to the housing, the electronic device configured to execute an application for an immersive content generation system.
Type: Grant
Filed: April 8, 2022
Date of Patent: September 19, 2023
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Michael Jutan, David Hirschfield, Robert Derry, Gary Martinez
Patent number: 11756279
Abstract: An immersive content generation system can capture a plurality of images of a physical object in a performance area using a taking camera. The object is at least partially surrounded by one or more displays presenting images of a virtual environment. The system can determine attributes of a lens of the taking camera, a first distance from the taking camera to the physical object, and a second distance between an LED display presenting a virtual image of the physical object and the taking camera. The system can determine the depth of field blur for the virtual objects based on the attributes of the taking camera and the virtual depth of each of the virtual objects. The system can apply the depth of field blur to the virtual objects and generate content based on the plurality of captured images and the determined amount of depth of field blur applied to the virtual objects.
Type: Grant
Filed: November 11, 2021
Date of Patent: September 12, 2023
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Roger Cordes, Lutz Latta
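Depth of field blur from lens attributes and distances is conventionally estimated with the thin-lens circle-of-confusion formula. The function below is a textbook sketch under that standard model, not the patent's actual computation; all units are assumed consistent (e.g., meters).

```python
def circle_of_confusion(focal_len, f_number, focus_dist, subject_dist):
    """Diameter of the blur circle for a point at subject_dist when the
    lens is focused at focus_dist (thin-lens approximation)."""
    aperture = focal_len / f_number  # physical aperture diameter
    return abs(aperture * focal_len * (subject_dist - focus_dist)
               / (subject_dist * (focus_dist - focal_len)))
```

A virtual object's blur kernel size would grow with this diameter: zero when the object sits at the focus distance, and larger the further its virtual depth departs from it.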
Patent number: 11758243
Abstract: A system includes a computing platform including processing hardware and a memory storing software code, a trained machine learning (ML) model, and a content thumbnail generator. The processing hardware executes the software code to receive interaction data describing interactions by a user with content thumbnails, identify, using the interaction data, an affinity by the user for at least one content thumbnail feature, and determine, using the interaction data, a predetermined business rule, or both, content for promotion to the user. The software code further provides a prediction, using the trained ML model and based on the affinity by the user, of the desirability of each of multiple candidate thumbnails for the content to the user, generates, using the content thumbnail generator and based on the prediction, a thumbnail having features of one or more of the candidate thumbnails, and displays the thumbnail to promote the content to the user.
Type: Grant
Filed: November 24, 2021
Date of Patent: September 12, 2023
Assignees: Disney Enterprises, Inc., LucasFilm Entertainment Company Ltd. LLC
Inventors: Alexander Niedt, Mara Idai Lucien, Juli Logemann, Miquel Angel Farre Guiu, Monica Alfaro Vendrell, Marc Junyent Martin
Patent number: 11748406
Abstract: Some implementations of the disclosure relate to a method, comprising: obtaining, at a computing device, first video clip data including multiple sequential video frames, the multiple sequential video frames including at least a first video frame and a second video frame that occurs after the first video frame; inputting, at the computing device, the first video clip data into at least one trained model that automatically predicts, based on at least features of the first video frame and features of the second video frame, sound effect data corresponding to the second video frame; and determining, at the computing device, based on the sound effect data predicted for the second video frame, a first sound effect file corresponding to the second video frame.
Type: Grant
Filed: November 1, 2021
Date of Patent: September 5, 2023
Assignee: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Nicolas Tsingos, Scott Levine, Stephen Morris
Patent number: 11727644
Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. The images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not. The techniques further include generating content based on the plurality of captured images.
Type: Grant
Filed: September 22, 2021
Date of Patent: August 15, 2023
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
Inventors: Roger Cordes, Nicholas Rasmussen, Kevin Wooley, Rachel Rose
Patent number: 11726581
Abstract: In some implementations, an apparatus may include a housing enclosing circuitry that may include a processor and a memory, the housing forming a handgrip. In addition, the apparatus may include a plurality of light sensors arranged in a particular configuration, each of the plurality of light sensors coupled to an exterior of the housing via a sensor arm. Also, the apparatus may include one or more controls mounted on the exterior of the housing and electrically coupled to the circuitry. The apparatus can include one or more antennas mounted on an exterior of the housing, and a transmitter connected to the circuitry and electrically connected to the one or more antennas to send data from the apparatus via a wireless protocol. The apparatus can include a mount for attaching an electronic device to the housing, the electronic device configured to execute an application for an immersive content generation system.
Type: Grant
Filed: April 8, 2022
Date of Patent: August 15, 2023
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Michael Jutan, David Hirschfield, Robert Derry, Gary Martinez
Patent number: 11671717
Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
Type: Grant
Filed: May 20, 2020
Date of Patent: June 6, 2023
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
Publication number: 20230136632
Abstract: Some implementations of the disclosure relate to a method, comprising: obtaining, at a computing device, first video clip data including multiple sequential video frames, the multiple sequential video frames including at least a first video frame and a second video frame that occurs after the first video frame; inputting, at the computing device, the first video clip data into at least one trained model that automatically predicts, based on at least features of the first video frame and features of the second video frame, sound effect data corresponding to the second video frame; and determining, at the computing device, based on the sound effect data predicted for the second video frame, a first sound effect file corresponding to the second video frame.
Type: Application
Filed: November 1, 2021
Publication date: May 4, 2023
Applicant: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Nicolas Tsingos, Scott Levine, Stephen Morris
Patent number: 11581970
Abstract: Some implementations of the disclosure relate to using a model trained on mixing console data of sound mixes to automate the process of sound mix creation. In one implementation, a non-transitory computer-readable medium has executable instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising: obtaining a first version of a sound mix; extracting first audio features from the first version of the sound mix; obtaining mixing metadata; automatically calculating with a trained model, using at least the mixing metadata and the first audio features, mixing console features; and deriving a second version of the sound mix using at least the mixing console features calculated by the trained model.
Type: Grant
Filed: April 21, 2021
Date of Patent: February 14, 2023
Assignee: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Stephen Morris, Scott Levine, Nicolas Tsingos
Patent number: 11580616
Abstract: A method of content production includes generating a survey of a performance area that includes a point cloud representing a first physical object, in a survey graph hierarchy, constraining the point cloud and a taking camera coordinate system as child nodes of an origin of a survey coordinate system, obtaining virtual content including a first virtual object that corresponds to the first physical object, applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with a portion of the virtual content that represents the first virtual object, displaying the first virtual object on one or more displays from a perspective of the taking camera, capturing, using the taking camera, one or more images of the performance area, and generating content based on the one or more images.
Type: Grant
Filed: April 13, 2021
Date of Patent: February 14, 2023
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Douglas G. Watkins, Paige M. Warner, Dacklin R. Young
Patent number: 11532102
Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
Type: Grant
Filed: January 31, 2022
Date of Patent: December 20, 2022
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
Patent number: 11514653
Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
Type: Grant
Filed: October 6, 2021
Date of Patent: November 29, 2022
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Roger Cordes, David Brickhill
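Mapping recognized gestures to actions in the virtual scene is, at its core, a dispatch table. A hypothetical minimal sketch (the registry, decorator, and scene representation are illustrative assumptions, not the patent's game-engine integration):

```python
GESTURE_ACTIONS = {}  # gesture name -> callback that mutates the scene

def on_gesture(name):
    """Decorator that registers a scene action for a named gesture."""
    def register(fn):
        GESTURE_ACTIONS[name] = fn
        return fn
    return register

def handle_tracking_frame(detected_gestures, scene):
    """For each gesture recognized in a tracking frame, trigger its action
    (e.g., spawn a digital effect or advance an animation graph)."""
    for g in detected_gestures:
        action = GESTURE_ACTIONS.get(g)
        if action:
            action(scene)
```

A real system would run this per tracking frame, with the scene changes then rendered and streamed out to the connected display devices.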
Patent number: 11508125
Abstract: Systems and techniques are provided for switching between different modes of a media content item. A media content item may include a movie that has different modes, such as a cinematic mode and an interactive mode. For example, a movie may be presented in a cinematic mode that does not allow certain user interactions with the movie. The movie may be switched to an interactive mode during any point of the movie, allowing a viewer to interact with various aspects of the movie. The movie may be displayed using different formats and resolutions depending on which mode the movie is being presented in.
Type: Grant
Filed: March 12, 2020
Date of Patent: November 22, 2022
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Lutz Markus Latta, Ian Wakelin, Darby Johnston, Andrew Grant, John Gaeta
Publication number: 20220345234
Abstract: Some implementations of the disclosure relate to using a model trained on mixing console data of sound mixes to automate the process of sound mix creation. In one implementation, a non-transitory computer-readable medium has executable instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising: obtaining a first version of a sound mix; extracting first audio features from the first version of the sound mix; obtaining mixing metadata; automatically calculating with a trained model, using at least the mixing metadata and the first audio features, mixing console features; and deriving a second version of the sound mix using at least the mixing console features calculated by the trained model.
Type: Application
Filed: April 21, 2021
Publication date: October 27, 2022
Applicant: Lucasfilm Entertainment Company Ltd. LLC
Inventors: Stephen Morris, Scott Levine, Nicolas Tsingos
Publication number: 20220343562
Abstract: In some implementations, a computing device in communication with an immersive content generation system may generate a first set of user interface elements configured to receive a first selection of a shape of a virtual stage light. In addition, the device may generate a second set of user interface elements configured to receive a second selection of an image for the virtual stage light. Also, the device may generate a third set of user interface elements configured to receive a third selection of a position and an orientation of the virtual stage light. Further, the device may generate a fourth set of user interface elements configured to receive a fourth selection of a color for the virtual stage light. Numerous other aspects are described.
Type: Application
Filed: April 8, 2022
Publication date: October 27, 2022
Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: David Hirschfield, Michael Jutan
Publication number: 20220342488
Abstract: In some implementations, an apparatus may include a housing enclosing circuitry that may include a processor and a memory, the housing forming a handgrip. In addition, the apparatus may include a plurality of light sensors arranged in a particular configuration, each of the plurality of light sensors coupled to an exterior of the housing via a sensor arm. Also, the apparatus may include one or more controls mounted on the exterior of the housing and electrically coupled to the circuitry. The apparatus can include one or more antennas mounted on an exterior of the housing, and a transmitter connected to the circuitry and electrically connected to the one or more antennas to send data from the apparatus via a wireless protocol. The apparatus can include a mount for attaching an electronic device to the housing, the electronic device configured to execute an application for an immersive content generation system.
Type: Application
Filed: April 8, 2022
Publication date: October 27, 2022
Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Michael Jutan, David Hirschfield, Robert Derry, Gary Martinez
Publication number: 20220343591
Abstract: A computing device in communication with an immersive content generation system can generate and present images of a virtual environment on one or more light-emitting diode (LED) displays at least partially surrounding a performance area. The device may capture a plurality of images of a performer or a physical object in the performance area along with at least some portion of the images of the virtual environment by a taking camera. The device may identify a color mismatch between a portion of the performer or the physical object and a virtual image of the performer or the physical object in the images of the virtual environment. The device may generate a patch for the images of the virtual environment to correct the color mismatch. The device may insert the patch into the images of the virtual environment. Also, the device may generate content based on the plurality of captured images.
Type: Application
Filed: April 8, 2022
Publication date: October 27, 2022
Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Michael Jutan, David Hirschfield, Alan Bucior
-
Publication number: 20220343590Abstract: In at least one embodiment, an immersive content generation system may receive a first input from a user indicating a lighting value. The computing device may receive a second input indicating a region of an immersive virtual environment to which the lighting value is to be applied. The computing device may apply the lighting value to the region of the immersive virtual environment. The computing device may output one or more images of the immersive virtual environment, the one or more images based, in part, on the input lighting value. Numerous other aspects are described.Type: ApplicationFiled: April 8, 2022Publication date: October 27, 2022Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Michael Jutan, David Hirschfield, Jeff Webster, Scott Richards
-
Patent number: 11403769Abstract: A motion capture tracking device comprising a base portion including a first alignment feature, a first magnetic element and an attachment mechanism operative to mechanically couple the base portion to a rod, a detachable end cap configured to be removably mated with the base portion, and a plurality of motion capture markers coupled to the end cap. The detachable end cap can include a second alignment feature and a second magnetic element, such that, during a mating event in which the detachable end cap is coupled to the base portion, the second alignment feature cooperates with the first alignment feature to ensure that the base portion and detachable end cap are mated in accordance with a unique registration and the second magnetic element cooperates with the first magnetic element to magnetically retain the detachable end cap in physical contact with the base portion upon completion of the mating event.Type: GrantFiled: September 1, 2020Date of Patent: August 2, 2022Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventor: Paige M. Warner
-
Patent number: 11308693Abstract: A method of edge loop selection includes accessing a polygon mesh; receiving a selection of a first edge connected to a first non-four-way intersection vertex; receiving, after receiving the selection of the first edge, a selection of a second edge connected to the first non-four-way intersection vertex; in response to receiving a command invoking an edge loop selection process: evaluating a topological relationship between the first edge and the second edge; determining a rule for processing a non-four-way intersection vertex based on the topological relationship between the first edge and the second edge; and completing an edge loop by, from the second edge, processing each respective four-way intersection vertex by choosing a middle edge as a next edge at the respective four-way intersection vertex, and processing each respective non-four-way intersection vertex based on the rule.Type: GrantFiled: July 16, 2020Date of Patent: April 19, 2022Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventor: Colette Mullenhoff
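The four-way-intersection rule in this abstract — extend an edge loop by always taking the "middle" (opposite) edge at each four-way vertex, stopping where the rule for non-four-way vertices takes over — can be sketched in a few lines. This is an illustrative reimplementation of the general edge-loop technique, not Lucasfilm's code; the grid test mesh and the angle-sorted adjacency (`build_grid`, `walk_loop`) are assumptions of this sketch.

```python
import math

def edge_key(a, b):
    """Canonical undirected edge: a sorted pair of vertices."""
    return tuple(sorted((a, b)))

def build_grid(w, h):
    """Quad-grid test mesh; returns vertex -> incident edges sorted
    cyclically by angle around that vertex."""
    edges = set()
    for x in range(w):
        for y in range(h):
            if x + 1 < w:
                edges.add(edge_key((x, y), (x + 1, y)))
            if y + 1 < h:
                edges.add(edge_key((x, y), (x, y + 1)))
    cycles = {}
    for a, b in edges:
        cycles.setdefault(a, []).append(edge_key(a, b))
        cycles.setdefault(b, []).append(edge_key(a, b))
    for v, es in cycles.items():
        def ang(e, v=v):
            o = e[0] if e[1] == v else e[1]  # the other endpoint
            return math.atan2(o[1] - v[1], o[0] - v[0])
        es.sort(key=ang)
    return cycles

def walk_loop(cycles, start_vertex, first_edge):
    """From first_edge leaving start_vertex, cross every four-way vertex
    on the 'middle' (opposite) edge; stop at a non-four-way vertex or
    when the loop closes back onto the starting edge."""
    loop = [first_edge]
    v = first_edge[0] if first_edge[1] == start_vertex else first_edge[1]
    prev = first_edge
    while len(cycles[v]) == 4:
        i = cycles[v].index(prev)
        nxt = cycles[v][(i + 2) % 4]  # middle edge at a four-way intersection
        if nxt == loop[0]:
            break                     # loop closed
        loop.append(nxt)
        v = nxt[0] if nxt[1] == v else nxt[1]
        prev = nxt
    return loop
```

On a 3×3 grid, starting on the horizontal edge entering the single interior (four-way) vertex, the walk continues straight across it and stops at the boundary vertex.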
-
Publication number: 20220067948Abstract: A motion capture tracking device comprising a base portion including a first alignment feature, a first magnetic element and an attachment mechanism operative to mechanically couple the base portion to a rod, a detachable end cap configured to be removably mated with the base portion, and a plurality of motion capture markers coupled to the end cap. The detachable end cap can include a second alignment feature and a second magnetic element, such that, during a mating event in which the detachable end cap is coupled to the base portion, the second alignment feature cooperates with the first alignment feature to ensure that the base portion and detachable end cap are mated in accordance with a unique registration and the second magnetic element cooperates with the first magnetic element to magnetically retain the detachable end cap in physical contact with the base portion upon completion of the mating event.Type: ApplicationFiled: September 1, 2020Publication date: March 3, 2022Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventor: Paige M. Warner
-
Publication number: 20220058870Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.Type: ApplicationFiled: November 5, 2021Publication date: February 24, 2022Applicant: Lucasfilm Entertainment Company Ltd. LLCInventors: Matthew Cong, Ronald Fedkiw, Lana Lan
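The retrieve-and-merge step this abstract describes — for each facial marker, use its 3D bundle to look up a local geometric shape in the facial dataset, then merge the local shapes — can be sketched with a nearest-bundle lookup and averaged overlaps. The dataset layout and scalar per-vertex offsets here are assumptions made for brevity, not the structure used by the inventors.

```python
import math

def reconstruct(bundles, dataset):
    """Sparse-marker facial reconstruction, sketched: for each marker,
    retrieve from the dataset the local shape whose stored 3D bundle is
    nearest the captured bundle, then merge the local shapes by averaging
    every vertex offset that appears in more than one patch."""
    merged, counts = {}, {}
    for marker, pos in bundles.items():
        # nearest stored (bundle_position, local_shape) entry for this marker
        _, shape = min(dataset[marker], key=lambda e: math.dist(e[0], pos))
        for vid, offset in shape.items():
            merged[vid] = merged.get(vid, 0.0) + offset
            counts[vid] = counts.get(vid, 0) + 1
    return {vid: merged[vid] / counts[vid] for vid in merged}
```

A production pipeline would blend full 3D patch geometry with spatial falloff rather than averaging scalar offsets, but the retrieve-then-merge shape of the method is the same.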
-
Patent number: 11238619Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.Type: GrantFiled: March 16, 2020Date of Patent: February 1, 2022Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
-
Publication number: 20220005279Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment, where the images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, and images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.Type: ApplicationFiled: September 22, 2021Publication date: January 6, 2022Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLCInventors: Roger CORDES, Nicholas RASMUSSEN, Kevin WOOLEY, Rachel ROSE
-
Publication number: 20210407174Abstract: A method of rendering an image includes receiving information of a virtual camera, including a camera position and a camera orientation defining a virtual screen; receiving information of a target screen, including a target screen position and a target screen orientation defining a plurality of pixels, each respective pixel corresponding to a respective UV coordinate on the target screen; for each respective pixel of the target screen: determining a respective XY coordinate of a corresponding point on the virtual screen based on the camera position, the camera orientation, the target screen position, the target screen orientation, and the respective UV coordinate; tracing one or more rays from the virtual camera through the corresponding point on the virtual screen toward a virtual scene; and estimating a respective color value for the respective pixel based on incoming light from virtual objects in the virtual scene that intersect the one or more rays.Type: ApplicationFiled: June 30, 2020Publication date: December 30, 2021Applicant: Lucasfilm Entertainment Company Ltd.Inventors: Nicholas Walker, David Weitzberg, André Mazzone
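The per-pixel mapping described here — take a target-screen pixel's UV coordinate, find the corresponding XY point on the virtual camera's screen, then trace rays through that point — can be sketched with a pinhole model. This is a minimal sketch under assumed conventions (orthonormal camera basis, image plane at a fixed distance along the forward axis), not the patented renderer; all names are illustrative.

```python
def add(a, b):    return tuple(x + y for x, y in zip(a, b))
def sub(a, b):    return tuple(x - y for x, y in zip(a, b))
def scale(a, s):  return tuple(x * s for x in a)
def dot(a, b):    return sum(x * y for x, y in zip(a, b))

def screen_uv_to_world(origin, u_axis, v_axis, u, v):
    """World-space position of the target-screen point at (u, v),
    with the screen spanned by u_axis and v_axis from its origin."""
    return add(origin, add(scale(u_axis, u), scale(v_axis, v)))

def project_to_virtual_screen(cam_pos, right, up, forward, screen_dist, world_pt):
    """XY coordinate on the virtual camera's screen: intersect the ray
    from the camera through world_pt with the image plane sitting
    screen_dist along the camera's forward axis (pinhole model)."""
    d = sub(world_pt, cam_pos)
    t = screen_dist / dot(d, forward)  # ray parameter at the image plane
    return (dot(d, right) * t, dot(d, up) * t)
```

Each target pixel's resulting XY coordinate then seeds the ray (or rays) traced from the virtual camera toward the virtual scene to estimate that pixel's color.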
-
Publication number: 20210407199Abstract: A method of edge loop selection includes accessing a polygon mesh; receiving a selection of a first edge connected to a first non-four-way intersection vertex; receiving, after receiving the selection of the first edge, a selection of a second edge connected to the first non-four-way intersection vertex; in response to receiving a command invoking an edge loop selection process: evaluating a topological relationship between the first edge and the second edge; determining a rule for processing a non-four-way intersection vertex based on the topological relationship between the first edge and the second edge; and completing an edge loop by, from the second edge, processing each respective four-way intersection vertex by choosing a middle edge as a next edge at the respective four-way intersection vertex, and processing each respective non-four-way intersection vertex based on the rule.Type: ApplicationFiled: July 16, 2020Publication date: December 30, 2021Applicant: Lucasfilm Entertainment Company Ltd.Inventor: Colette Mullenhoff
-
Patent number: 11200752Abstract: In at least one embodiment, an immersive content generation system may receive a first user input that defines a three-dimensional (3D) volume within a performance area. In at least one embodiment, the system may capture a plurality of images of an object in the performance area using a camera, wherein the object is at least partially surrounded by one or more displays presenting images of a virtual environment. In at least one embodiment, the system may receive a second user input to adjust a color value of a virtual image of the object as displayed in the images in the virtual environment. In at least one embodiment, the system may perform a color correction pass for the displayed images of the virtual environment. In at least one embodiment, the system may generate content based on the plurality of captured images that are corrected via the color correction pass.Type: GrantFiled: August 21, 2020Date of Patent: December 14, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Roger Cordes, Lutz Latta
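The workflow above — a user defines a 3D volume in the performance area, then a color correction pass adjusts only the displayed imagery associated with that volume — can be sketched as a gated per-channel gain. The axis-aligned box and the `(position, rgb)` sample layout are simplifying assumptions of this sketch, not the patented system's representation.

```python
def in_volume(p, lo, hi):
    """Is point p inside the user-defined axis-aligned 3D volume?"""
    return all(l <= c <= h for c, l, h in zip(p, lo, hi))

def color_correction_pass(samples, lo, hi, gain):
    """Apply a per-channel gain only to samples whose world-space
    position falls inside the volume; others pass through unchanged.
    `samples` is a list of (position, rgb) pairs (illustrative layout)."""
    return [tuple(c * g for c, g in zip(rgb, gain)) if in_volume(pos, lo, hi)
            else rgb
            for pos, rgb in samples]
```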
-
Patent number: 11170571Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.Type: GrantFiled: November 15, 2019Date of Patent: November 9, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLCInventors: Matthew Cong, Ronald Fedkiw, Lana Lan
-
Publication number: 20210342971Abstract: A method of content production includes generating a survey of a performance area that includes a point cloud representing a first physical object, in a survey graph hierarchy, constraining the point cloud and a taking camera coordinate system as child nodes of an origin of a survey coordinate system, obtaining virtual content including a first virtual object that corresponds to the first physical object, applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with a portion of the virtual content that represents the first virtual object, displaying the first virtual object on one or more displays from a perspective of the taking camera, capturing, using the taking camera, one or more images of the performance area, and generating content based on the one or more images.Type: ApplicationFiled: April 13, 2021Publication date: November 4, 2021Applicant: Lucasfilm Entertainment Company Ltd.Inventors: Douglas G. Watkins, Paige M. Warner, Dacklin R. Young
-
Patent number: 11145125Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.Type: GrantFiled: September 13, 2018Date of Patent: October 12, 2021Assignee: Lucasfilm Entertainment Company Ltd.Inventors: Roger Cordes, David Brickhill
-
Patent number: 11132838Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment, where the images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, and images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.Type: GrantFiled: November 6, 2019Date of Patent: September 28, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLCInventors: Roger Cordes, Richard Bluff, Lutz Latta
-
Patent number: 11132837Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment, where the images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, and images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.Type: GrantFiled: November 6, 2019Date of Patent: September 28, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLCInventors: Roger Cordes, Nicholas Rasmussen, Kevin Wooley, Rachel Rose
-
Patent number: 11128984Abstract: A method may include causing first content to be displayed on a display device; causing second content to be rendered irrespective of a location of a mobile device relative to the display device; and causing the second content to be displayed on the mobile device such that the second content is layered over the first content. When the second content has moved a predetermined distance from the screen, the method may also include causing the second content to be rendered based on the location of the mobile device relative to the display device; and causing the second content to be displayed on the mobile device such that the second content is layered over the first content.Type: GrantFiled: October 25, 2019Date of Patent: September 21, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: John Gaeta, Michael Koperwas, Nicholas Rasmussen
-
Patent number: 11113885Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.Type: GrantFiled: September 13, 2018Date of Patent: September 7, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Roger Cordes, David Brickhill
-
Patent number: 11107195Abstract: An immersive content production system may capture a plurality of images of a physical object in a performance area using a taking camera. The system may determine an orientation and a velocity of the taking camera with respect to the physical object in the performance area. A user may select a first amount of motion blur exhibited by the images of the physical object based on a desired motion effect. The system may determine a correction to apply to a virtual object based at least in part on the orientation and the velocity of the taking camera and the desired motion blur effect. The system may also detect the distance from the taking camera to a physical object and the taking camera to the virtual display. The system may use these distances to generate a corrected circle of confusion for the virtual images on the display.Type: GrantFiled: August 21, 2020Date of Patent: August 31, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Roger Cordes, Lutz Latta
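The "corrected circle of confusion" this abstract mentions arises because the LED wall sits at a fixed physical distance while the virtual content it shows is meant to appear farther away. The standard thin-lens circle-of-confusion formula makes the idea concrete; the difference-of-blurs correction below is a sketch of the general idea, not the patented method, and the function names are illustrative.

```python
def coc_diameter(aperture, focal_len, focus_dist, subject_dist):
    """Thin-lens circle-of-confusion diameter on the sensor for a point
    at subject_dist when the lens is focused at focus_dist (all metres):
    c = A * f * |s - d| / (d * (s - f)) with s = focus_dist, d = subject_dist."""
    return abs(aperture * focal_len * (subject_dist - focus_dist)
               / (subject_dist * (focus_dist - focal_len)))

def extra_blur_for_virtual(aperture, focal_len, focus_dist,
                           wall_dist, virtual_dist):
    """Blur the system would add in software so that content shown on the
    LED wall at wall_dist reads as if it sat at virtual_dist: the blur the
    virtual depth should receive minus what the wall already gets optically."""
    want = coc_diameter(aperture, focal_len, focus_dist, virtual_dist)
    have = coc_diameter(aperture, focal_len, focus_dist, wall_dist)
    return max(0.0, want - have)
```

With the wall in focus, any virtual object placed deeper than the wall needs a positive software blur; an object at the wall's own depth needs none.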
-
Patent number: 11099654Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user.Type: GrantFiled: April 17, 2020Date of Patent: August 24, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Darby Johnston, Ian Wakelin
-
Patent number: D976992Type: GrantFiled: May 22, 2020Date of Patent: January 31, 2023Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventor: Paige Warner