Lucasfilm Patent Applications

The following Lucasfilm patent applications are pending before the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240096035
    Abstract: A method of content production may include receiving tracking information for a camera with a frustum configured to capture images of a subject in an immersive environment. A first image of a virtual environment corresponding to the frustum may be rendered using a first rendering process based on the tracking information to be perspective-correct when displayed on the displays and viewed through the camera. A second image of the virtual environment may also be rendered using a second rendering process for a specific display. The first image and the second image may be rendered in parallel. The second image and a portion of the first image may be composited together to generate a composite image, where the portion of the first image may correspond to a portion of the display captured by the frustum.
    Type: Application
    Filed: September 21, 2023
    Publication date: March 21, 2024
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Nicholas Rasmussen, Lutz Latta
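The two-pass render-and-composite step this abstract describes can be sketched without a real renderer; the following is a minimal numpy illustration (the arrays, mask, and function name are hypothetical stand-ins, not from the patent):

```python
import numpy as np

def composite_frustum(display_img: np.ndarray,
                      frustum_img: np.ndarray,
                      frustum_mask: np.ndarray) -> np.ndarray:
    """Composite the perspective-correct (frustum) render over the
    per-display render wherever the taking camera's frustum covers
    the display.

    display_img, frustum_img: (H, W, 3) float images from the two
    parallel rendering processes.
    frustum_mask: (H, W) floats in [0, 1]; 1 where the display pixel
    falls inside the camera frustum.
    """
    mask = frustum_mask[..., None]  # broadcast over the color channels
    return frustum_img * mask + display_img * (1.0 - mask)
```

Outside the mask the display keeps its own render; inside it, the perspective-correct image wins, which matches the abstract's "portion of the display captured by the frustum."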
  • Publication number: 20240038256
    Abstract: Some implementations of the disclosure relate to a non-transitory computer-readable medium having executable instructions stored thereon that, when executed by a processor, cause a system to perform operations comprising: obtaining a first energy-based target for audio; obtaining a first version of a sound mix including one or more audio components; computing, for each audio frame of multiple audio frames of each of the one or more audio components, a first audio feature measurement value; optimizing, based at least on the first energy-based target and the first audio feature measurement values, gain values of the audio frames; and after optimizing the gain values, applying the gain values to the first version of the sound mix to obtain a second version of the sound mix.
    Type: Application
    Filed: August 1, 2022
    Publication date: February 1, 2024
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: NICOLAS TSINGOS, SCOTT LEVINE
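A toy version of the energy-target gain optimization described above, assuming per-frame RMS as the audio feature measurement and a simple ratio-to-target rule in place of the patent's optimizer (all names here are illustrative, not from the filing):

```python
import numpy as np

def frame_gains(audio: np.ndarray, frame_len: int, target_rms: float,
                max_gain: float = 4.0) -> np.ndarray:
    """Per-frame gains steering each frame's RMS energy toward a target."""
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    gains = np.where(rms > 0, target_rms / np.maximum(rms, 1e-12), 1.0)
    return np.clip(gains, 0.0, max_gain)

def apply_gains(audio: np.ndarray, frame_len: int,
                gains: np.ndarray) -> np.ndarray:
    """Apply the optimized per-frame gains to produce the second mix."""
    n = len(gains) * frame_len
    out = audio[:n].reshape(len(gains), frame_len) * gains[:, None]
    return out.reshape(-1)
```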
  • Publication number: 20230336679
    Abstract: A motion capture system comprising: a master clock configured to repeatedly generate and output, at a frame rate, a primary clock signal that conveys when a video frame starts; a first camera configured to capture light within a first set of wavelengths and operably coupled to receive the master clock signal and initiate an image capture sequence on a frame-by-frame basis in fixed phase relationship with the primary clock signal to generate a first set of images at the frame rate from light captured within the first set of wavelengths; a synchronization module operably coupled to receive the master clock signal from the master clock and configured to generate a synchronization signal offset in time from and in a fixed relationship with the primary clock signal; and a second camera configured to capture light within a second set of wavelengths, different than the first set of wavelengths, and operably coupled to receive the synchronization signal and initiate an image capture sequence on a frame-by-frame basis.
    Type: Application
    Filed: March 29, 2023
    Publication date: October 19, 2023
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Robert Derry, Gary P. Martinez, Brian Hook
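The fixed phase relationship between the primary clock signal and the offset synchronization signal can be illustrated with a small timing sketch (frame rate and offset are placeholder values, not from the patent):

```python
def trigger_times(frame_rate: float, n_frames: int, offset_s: float):
    """Return (primary, sync) trigger times in seconds.

    The first camera fires on each primary clock edge; the second
    camera fires offset_s later, keeping a fixed phase relationship
    with the master clock.
    """
    period = 1.0 / frame_rate
    primary = [i * period for i in range(n_frames)]
    sync = [t + offset_s for t in primary]
    return primary, sync
```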
  • Publication number: 20230326142
    Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment, where the images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, and images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.
    Type: Application
    Filed: June 13, 2023
    Publication date: October 12, 2023
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Roger CORDES, Nicholas RASMUSSEN, Kevin WOOLEY, Rachel ROSE
  • Publication number: 20230316587
    Abstract: A computer-implemented method of changing a face within an output image or video frame that includes: receiving an input image that includes a face presenting a facial expression in a pose; processing the image with a neural network encoder to generate a latent space point that is an encoded representation of the image; decoding the latent space point to generate an initial output image in accordance with a desired facial identity but with the facial expression and pose of the face in the input image; identifying a feature of the facial expression in the initial output image to edit; applying an adjustment vector to a latent space point corresponding to the initial output image to generate an adjusted latent space point; and decoding the adjusted latent space point to generate an adjusted output image in accordance with the desired facial identity but with the facial expression and pose of the face in the input image altered in accordance with the adjustment vector.
    Type: Application
    Filed: March 29, 2022
    Publication date: October 5, 2023
    Applicants: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC, DISNEY ENTERPRISES, INC.
    Inventors: Sirak Ghebremusse, Stéphane Grabli, Jacek Krzysztof Naruniec, Romann Matthew Weber, Christopher Richard Schroers
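The encode-adjust-decode pipeline in this abstract reduces to arithmetic in latent space. The sketch below uses fixed random linear maps as stand-ins for the trained neural encoder and decoder (everything here is a hypothetical toy, not the patented models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the trained encoder/decoder: random linear maps so the
# sketch is runnable. A 16-d "image" maps to an 8-d latent point.
W_enc = rng.standard_normal((8, 16))
W_dec = rng.standard_normal((16, 8))

def encode(image_vec: np.ndarray) -> np.ndarray:
    return W_enc @ image_vec

def decode(latent: np.ndarray) -> np.ndarray:
    return W_dec @ latent

def edit_expression(image_vec: np.ndarray,
                    adjustment: np.ndarray) -> np.ndarray:
    """Encode the image, shift the latent point by an adjustment vector
    (e.g. a learned 'smile' direction), and decode the adjusted image."""
    z = encode(image_vec)
    return decode(z + adjustment)
```

With linear maps the adjustment acts additively on the output, which makes the latent-space edit easy to verify; real autoencoders are nonlinear, but the control flow is the same.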
  • Publication number: 20230136632
    Abstract: Some implementations of the disclosure relate to a method, comprising: obtaining, at a computing device, first video clip data including multiple sequential video frames, the multiple sequential video frames including at least a first video frame and a second video frame that occurs after the first video frame; inputting, at the computing device, the first video clip data into at least one trained model that automatically predicts, based on at least features of the first video frame and features of the second video frame, sound effect data corresponding to the second video frame; and determining, at the computing device, based on the sound effect data predicted for the second video frame, a first sound effect file corresponding to the second video frame.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 4, 2023
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Nicolas Tsingos, Scott Levine, Stephen Morris
  • Publication number: 20220343562
    Abstract: In some implementations, a computing device in communication with an immersive content generation system may generate a first set of user interface elements configured to receive a first selection of a shape of a virtual stage light. In addition, the device may generate a second set of user interface elements configured to receive a second selection of an image for the virtual stage light. Also, the device may generate a third set of user interface elements configured to receive a third selection of a position and an orientation of the virtual stage light. Further, the device may generate a fourth set of user interface elements configured to receive a fourth selection of a color for the virtual stage light. Numerous other aspects are described.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 27, 2022
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: David Hirschfield, Michael Jutan
  • Publication number: 20220343591
    Abstract: A computing device in communication with an immersive content generation system can generate and present images of a virtual environment on one or more light-emitting diode (LED) displays at least partially surrounding a performance area. The device may capture a plurality of images of a performer or a physical object in the performance area along with at least some portion of the images of the virtual environment by a taking camera. The device may identify a color mismatch between a portion of the performer or the physical object and a virtual image of the performer or the physical object in the images of the virtual environment. The device may generate a patch for the images of the virtual environment to correct the color mismatch. The device may insert the patch into the images of the virtual environment. Also, the device may generate content based on the plurality of captured images.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 27, 2022
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Michael Jutan, David Hirschfield, Alan Bucior
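One crude way to realize the color-mismatch patch this abstract describes is a per-channel mean-gain correction between the photographed and virtual regions; the sketch below is illustrative only and not the patented method:

```python
import numpy as np

def color_patch(virtual_region: np.ndarray,
                real_region: np.ndarray) -> np.ndarray:
    """Scale the virtual region so its per-channel mean color matches
    the real (photographed) region, yielding a patch to insert back
    into the images of the virtual environment.

    Both inputs are (H, W, 3) float images in [0, 1].
    """
    real_mean = real_region.reshape(-1, 3).mean(axis=0)
    virt_mean = virtual_region.reshape(-1, 3).mean(axis=0)
    gain = real_mean / np.maximum(virt_mean, 1e-8)  # per-channel gain
    return np.clip(virtual_region * gain, 0.0, 1.0)
```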
  • Publication number: 20220342488
    Abstract: In some implementations, an apparatus may include a housing enclosing circuitry that may include a processor and a memory, the housing forming a handgrip. In addition, the apparatus may include a plurality of light sensors arranged in a particular configuration, each of the plurality of light sensors coupled to an exterior of the housing via a sensor arm. Also, the apparatus may include one or more controls mounted on the exterior of the housing and electrically coupled to the circuitry. The apparatus can include one or more antennas mounted on an exterior of the housing; and a transmitter connected to the circuitry and electrically connected to the one or more antennas to send data from the apparatus via a wireless protocol. The apparatus can include a mount for attaching an electronic device to the housing, the electronic device configured to execute an application for an immersive content generation system.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 27, 2022
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Michael Jutan, David Hirschfield, Robert Derry, Gary Martinez
  • Publication number: 20220343590
    Abstract: In at least one embodiment, an immersive content generation system may receive a first input from a user indicating a lighting value. The computing device may receive a second input indicating a region of an immersive virtual environment to which the lighting value is to be applied. The computing device may apply the lighting value to the region of the immersive virtual environment. The computing device may output one or more images of the immersive virtual environment, the one or more images based, in part, on the input lighting value. Numerous other aspects are described.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 27, 2022
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Michael Jutan, David Hirschfield, Jeff Webster, Scott Richards
  • Publication number: 20220345234
    Abstract: Some implementations of the disclosure relate to using a model trained on mixing console data of sound mixes to automate the process of sound mix creation. In one implementation, a non-transitory computer-readable medium has executable instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising: obtaining a first version of a sound mix; extracting first audio features from the first version of the sound mix; obtaining mixing metadata; automatically calculating with a trained model, using at least the mixing metadata and the first audio features, mixing console features; and deriving a second version of the sound mix using at least the mixing console features calculated by the trained model.
    Type: Application
    Filed: April 21, 2021
    Publication date: October 27, 2022
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Stephen Morris, Scott Levine, Nicolas Tsingos
  • Publication number: 20220067948
    Abstract: A motion capture tracking device comprising a base portion including a first alignment feature, a first magnetic element and an attachment mechanism operative to mechanically couple the base portion to a rod, a detachable end cap configured to be removably mated with the base portion, and a plurality of motion capture markers coupled to the end cap. The detachable end cap can include a second alignment feature and a second magnetic element, such that, during a mating event in which the detachable end cap is coupled to the base portion, the second alignment feature cooperates with the first alignment feature to ensure that the base portion and detachable end cap are mated in accordance with a unique registration and the second magnetic element cooperates with the first magnetic element to magnetically retain the detachable end cap in physical contact with the base portion upon completion of the mating event.
    Type: Application
    Filed: September 1, 2020
    Publication date: March 3, 2022
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventor: Paige M. Warner
  • Publication number: 20220058870
    Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.
    Type: Application
    Filed: November 5, 2021
    Publication date: February 24, 2022
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
  • Publication number: 20220005279
    Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment, where the images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, and images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 6, 2022
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Roger CORDES, Nicholas RASMUSSEN, Kevin WOOLEY, Rachel ROSE
  • Publication number: 20210407199
    Abstract: A method of edge loop selection includes accessing a polygon mesh; receiving a selection of a first edge connected to a first non-four-way intersection vertex; receiving, after receiving the selection of the first edge, a selection of a second edge connected to the first non-four-way intersection vertex; in response to receiving a command invoking an edge loop selection process: evaluating a topological relationship between the first edge and the second edge; determining a rule for processing a non-four-way intersection vertex based on the topological relationship between the first edge and the second edge; and completing an edge loop by, from the second edge, processing each respective four-way intersection vertex by choosing a middle edge as a next edge at the respective four-way intersection vertex, and processing each respective non-four-way intersection vertex based on the rule.
    Type: Application
    Filed: July 16, 2020
    Publication date: December 30, 2021
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventor: Colette Mullenhoff
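The four-way-intersection rule above ("choose the middle edge") can be demonstrated on a toy mesh whose vertices store their incident neighbors in cyclic order. This is a simplified sketch of the walk, not the patented selection process (non-four-way vertices simply stop the walk here, whereas the patent derives a rule from the two picked edges):

```python
def walk_edge_loop(adjacency, start_edge):
    """Walk an edge loop: at each four-way vertex continue through the
    'middle' (opposite) edge, i.e. two steps around the cyclic fan.

    adjacency: vertex -> list of neighbor vertices in cyclic order.
    start_edge: directed edge (u, v) to start from.
    Returns the ordered list of vertices visited.
    """
    u, v = start_edge
    loop = [u, v]
    while True:
        fan = adjacency[v]
        if len(fan) != 4:               # non-four-way vertex: stop here
            break
        nxt = fan[(fan.index(u) + 2) % 4]  # opposite edge in the fan
        if (v, nxt) == tuple(start_edge):  # walked back onto the start
            break
        u, v = v, nxt
        loop.append(v)
        if v == start_edge[0]:          # loop closed
            break
    return loop
```

On a quad ring the walk returns to its starting vertex, completing the loop.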
  • Publication number: 20210407174
    Abstract: A method of rendering an image includes receiving information of a virtual camera, including a camera position and a camera orientation defining a virtual screen; receiving information of a target screen, including a target screen position and a target screen orientation defining a plurality of pixels, each respective pixel corresponding to a respective UV coordinate on the target screen; for each respective pixel of the target screen: determining a respective XY coordinate of a corresponding point on the virtual screen based on the camera position, the camera orientation, the target screen position, the target screen orientation, and the respective UV coordinate; tracing one or more rays from the virtual camera through the corresponding point on the virtual screen toward a virtual scene; and estimating a respective color value for the respective pixel based on incoming light from virtual objects in the virtual scene that intersect the one or more rays.
    Type: Application
    Filed: June 30, 2020
    Publication date: December 30, 2021
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Nicholas Walker, David Weitzberg, André Mazzone
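The per-pixel mapping this abstract describes, from a point on the target screen to an XY coordinate on the virtual camera's screen, amounts to a ray-plane intersection. A minimal sketch follows; placing the virtual screen on the camera-space plane z = screen_z is an assumption for illustration:

```python
import numpy as np

def virtual_screen_xy(cam_pos: np.ndarray, world_point: np.ndarray,
                      screen_z: float = 1.0) -> np.ndarray:
    """Return the XY coordinate where the ray from the camera through a
    target-screen point crosses the virtual screen plane z = screen_z.

    cam_pos, world_point: length-3 arrays in a shared coordinate frame.
    """
    d = world_point - cam_pos            # ray direction
    t = (screen_z - cam_pos[2]) / d[2]   # parameter at the plane
    hit = cam_pos + t * d
    return hit[:2]
```

A ray traced from the camera through that XY point then samples the virtual scene to estimate the pixel's color, as in the abstract.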
  • Publication number: 20210342971
    Abstract: A method of content production includes generating a survey of a performance area that includes a point cloud representing a first physical object, in a survey graph hierarchy, constraining the point cloud and a taking camera coordinate system as child nodes of an origin of a survey coordinate system, obtaining virtual content including a first virtual object that corresponds to the first physical object, applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with a portion of the virtual content that represents the first virtual object, displaying the first virtual object on one or more displays from a perspective of the taking camera, capturing, using the taking camera, one or more images of the performance area, and generating content based on the one or more images.
    Type: Application
    Filed: April 13, 2021
    Publication date: November 4, 2021
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Douglas G. Watkins, Paige M. Warner, Dacklin R. Young
  • Publication number: 20210150810
    Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 20, 2021
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
  • Publication number: 20200394999
    Abstract: Implementations of the disclosure describe systems and methods that leverage machine learning to automate the process of creating music and effects mixes from original sound mixes including domestic dialogue. In some implementations, a method includes: receiving a sound mix including human dialogue; extracting metadata from the sound mix, where the extracted metadata categorizes the sound mix; extracting content feature data from the sound mix, the extracted content feature data including an identification of the human dialogue and instances or times the human dialogue occurs within the sound mix; automatically calculating, with a trained model, content feature data of a music and effects (M&E) sound mix using at least the extracted metadata and the extracted content feature data of the sound mix; and deriving the M&E sound mix using at least the calculated content feature data.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 17, 2020
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Scott Levine, Stephen Morris
  • Publication number: 20200286301
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
    Type: Application
    Filed: March 3, 2020
    Publication date: September 10, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
  • Publication number: 20200286284
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.
    Type: Application
    Filed: November 12, 2019
    Publication date: September 10, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
  • Publication number: 20200288050
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Application
    Filed: May 20, 2020
    Publication date: September 10, 2020
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Publication number: 20200265638
    Abstract: Implementations of the disclosure are directed to generating shadows in the physical world that correspond to virtual objects displayed on MR displays. In some implementations, a method includes: synchronously presenting a version of a scene on each of a MR display system and a projector display system, where during presentation: the MR display system displays a virtual object overlaid over a view of a physical environment; and a projector of the projector display system creates a shadow on a surface in the physical environment, the created shadow corresponding to the virtual object displayed by the MR display. In some implementations, the method includes: loading in a memory of the MR display system, a first version of the scene including the virtual object; and loading in a memory of the projector display system a second version of the scene including a virtual surface onto which the virtual object casts a shadow.
    Type: Application
    Filed: February 20, 2019
    Publication date: August 20, 2020
    Applicant: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Michael Koperwas, Lutz Latta
  • Publication number: 20200249765
    Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user.
    Type: Application
    Filed: April 17, 2020
    Publication date: August 6, 2020
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Darby Johnston, Ian Wakelin
  • Publication number: 20200143592
    Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment, where the images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, and images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.
    Type: Application
    Filed: November 6, 2019
    Publication date: May 7, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Roger CORDES, Richard BLUFF, Lutz LATTA
  • Publication number: 20200145644
    Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area consisting of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment, where the images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, and images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.
    Type: Application
    Filed: November 6, 2019
    Publication date: May 7, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
    Inventors: Roger CORDES, Nicholas RASMUSSEN, Kevin WOOLEY, Rachel ROSE
  • Publication number: 20190122374
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Application
    Filed: August 13, 2018
    Publication date: April 25, 2019
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Leandro Estebecorena, John Knoll, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Publication number: 20190124244
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Application
    Filed: August 13, 2018
    Publication date: April 25, 2019
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Publication number: 20170195527
    Abstract: A handheld device includes: an input control configured to control and modify a virtual scene including a virtual camera; and a display that shows a representation of the controlled and modified virtual scene generated by the virtual camera. A system includes: a computer system configured to execute program instructions for generating a virtual scene including a virtual camera; and a handheld device configured to communicate with the computer system for controlling and modifying the virtual scene, the handheld device comprising: an input control configured to control and modify the virtual scene; and a display that shows a representation of the controlled and modified virtual scene generated by the virtual camera.
    Type: Application
    Filed: March 20, 2017
    Publication date: July 6, 2017
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Spencer Reynolds, Michael Sanders, Kevin Wooley, Steve Sullivan, Adam Schnitzer
  • Publication number: 20170178382
    Abstract: A multi-channel tracking pattern is provided along with techniques and systems for performing motion capture using the multi-channel tracking pattern. The multi-channel tracking pattern includes a plurality of shapes having different colors on different portions of the pattern. The portions with the unique shapes and colors allow a motion capture system to track motion of an object bearing the pattern across a plurality of video frames.
    Type: Application
    Filed: February 11, 2016
    Publication date: June 22, 2017
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventor: John Levin
  • Publication number: 20170148201
    Abstract: Performance capture systems and techniques are provided for capturing a performance of a subject and reproducing an animated performance that tracks the subject's performance. For example, systems and techniques are provided for determining control values for controlling an animation model to define features of a computer-generated representation of a subject based on the performance. A method may include obtaining input data corresponding to a pose performed by the subject, the input data including position information defining positions on a face of the subject. The method may further include obtaining an animation model for the subject that includes adjustable controls that control the animation model to define facial features of the computer-generated representation of the face, and matching one or more of the positions on the face with one or more corresponding positions on the animation model.
    Type: Application
    Filed: February 3, 2017
    Publication date: May 25, 2017
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Kiran Bhat, Michael Koperwas, Jeffery Yost, Ji Hun Yu, Sheila Santos
  • Publication number: 20170084072
    Abstract: A method includes receiving a first motion path for an object, where an orientation of the object is not aligned with the first motion path for the object for at least a portion of the first motion path. The method also includes receiving a first motion path for a virtual camera and determining a speed of the object along the first motion path for the object. The method additionally includes calculating a second motion path for the object based on the speed of the object along the first motion path for the object and the orientation of the object, where the orientation of the object is aligned with the second motion path. The method further includes calculating a second motion path for the virtual camera based on a difference between the first motion path of the object and the second motion path of the object.
    Type: Application
    Filed: September 23, 2015
    Publication date: March 23, 2017
    Applicant: LUCASFILM ENTERTAINMENT COMPANY, LTD.
    Inventor: David Weitzberg
  • Publication number: 20170046865
    Abstract: Systems and techniques are provided for performing animation motion capture of objects within an environment. For example, a method may include obtaining input data including a three-dimensional point cloud of the environment. The three-dimensional point cloud is generated using a three-dimensional laser scanner including multiple laser emitters and multiple laser receivers. The method may further include obtaining an animation model for an object within the environment. The animation model includes a mesh, an animation skeleton rig, and adjustable controls that control the animation skeleton rig to define a position of one or more faces of the mesh. The method may further include determining a pose of the object within the environment. Determining a pose includes fitting the one or more faces of the mesh to one or more points of a portion of the three-dimensional point cloud. The portion of the three-dimensional point cloud corresponds to the object in the environment.
    Type: Application
    Filed: August 14, 2015
    Publication date: February 16, 2017
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventor: Brian Cantwell
  • Publication number: 20160284136
    Abstract: A system and method facilitating a user to manipulate a virtual reality (VR) environment are disclosed. The user may provide an input via a touch sensitive surface of a computing device associated with the user to bind a virtual object in the VR environment to the computing device. The user may then move and/or rotate the computing device to cause the bound virtual object to move and/or rotate in the VR environment accordingly. In some examples, the bound virtual object may cast a ray into the VR environment. The movement and/or rotation of the virtual object controlled by the computing device in those examples can change the direction of the ray. In some examples, the virtual object may include a virtual camera. In those examples, the user may move and/or rotate the virtual camera in the VR environment by moving and/or rotating the computing device.
    Type: Application
    Filed: September 30, 2015
    Publication date: September 29, 2016
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Darby Johnston, Ian Wakelin
  • Publication number: 20160283081
    Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user.
    Type: Application
    Filed: September 30, 2015
    Publication date: September 29, 2016
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Darby Johnston, Ian Wakelin
  • Publication number: 20160227262
    Abstract: Systems and techniques are provided for switching between different modes of a media content item. A media content item may include a movie that has different modes, such as a cinematic mode and an interactive mode. For example, a movie may be presented in a cinematic mode that does not allow certain user interactions with the movie. The movie may be switched to an interactive mode during any point of the movie, allowing a viewer to interact with various aspects of the movie. The movie may be displayed using different formats and resolutions depending on the mode in which the movie is being presented.
    Type: Application
    Filed: March 31, 2016
    Publication date: August 4, 2016
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Andrew Grant, Lutz Markus Latta, Ian Wakelin, Darby Johnston, John Gaeta
  • Publication number: 20160180501
    Abstract: Methods and systems efficiently apply known distortion, such as of a camera and lens, to source image data to produce data of an output image with the distortion. In an embodiment, an output image field is segmented into regions so that on each segment the distortion function is approximately linear, and segmentation data is stored in a quadtree. The distortion function is applied to the segmented image field to produce a segmented rendered distortion image (SRDI) and a corresponding look-up table. To distort a source image, a location in the output image field is selected, and the uniquely colored segment at the same location in the SRDI is found. The look-up table provides the local linear inverse of the distortion function, which is applied to determine from where in the source image to take image texture data for the distorted output image.
    Type: Application
    Filed: December 22, 2014
    Publication date: June 23, 2016
    Applicant: LUCASFILM ENTERTAINMENT COMPANY, LTD.
    Inventor: RONALD MALLET
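    The abstract above describes looking up a locally linear inverse of a distortion function per image segment. As a rough illustration of that idea only, the sketch below uses a fixed tile grid in place of the adaptive quadtree and a toy radial distortion model; the function names and the distortion model are assumptions, not the patent's method.

    ```python
    import numpy as np

    def distort(p, k=0.1):
        """Toy radial distortion around the image center (illustrative only)."""
        r2 = p[0] ** 2 + p[1] ** 2
        return p * (1.0 + k * r2)

    def build_lut(tiles=8, extent=1.0, k=0.1):
        """For each tile, fit a local linear (affine) inverse of the distortion.

        The tile index plays the role of the uniquely colored segment in the
        rendered distortion image; the returned table is the look-up table.
        """
        lut = {}
        step = 2 * extent / tiles
        eps = 1e-5
        for i in range(tiles):
            for j in range(tiles):
                c = np.array([-extent + (i + 0.5) * step,
                              -extent + (j + 0.5) * step])
                # Finite-difference Jacobian of the forward distortion at the tile center.
                J = np.column_stack([
                    (distort(c + [eps, 0.0], k) - distort(c, k)) / eps,
                    (distort(c + [0.0, eps], k) - distort(c, k)) / eps,
                ])
                lut[(i, j)] = (c, distort(c, k), np.linalg.inv(J))
        return lut

    def undistort(q, lut, tiles=8, extent=1.0):
        """Locate the tile for an output-image point and apply its linear inverse."""
        step = 2 * extent / tiles
        i = min(tiles - 1, max(0, int((q[0] + extent) / step)))
        j = min(tiles - 1, max(0, int((q[1] + extent) / step)))
        c, qc, Jinv = lut[(i, j)]
        return c + Jinv @ (np.asarray(q) - qc)
    ```

    The local inverse tells the renderer from where in the source image to sample texture data for each distorted output pixel, without inverting the full distortion analytically.
    
    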
  • Publication number: 20160093112
    Abstract: A method may include receiving a plurality of objects from a 3-D virtual scene. The plurality of objects may be arranged in a hierarchy. The method may also include generating a plurality of identifiers for the plurality of objects. The plurality of identifiers may include a first identifier for a first object in the plurality of objects, and the first identifier may be generated based on a position of the first object in the hierarchy. The method may additionally include performing a rendering operation on the plurality of objects to generate a deep image. The deep image may include a plurality of samples that correspond to the first object. The method may further include propagating the plurality of identifiers through the rendering operation such that each of the plurality of samples in the deep image that correspond to the first object is associated with the first identifier.
    Type: Application
    Filed: September 30, 2014
    Publication date: March 31, 2016
    Applicant: LUCASFILM ENTERTAINMENT COMPANY, LTD.
    Inventors: SHIJUN HAW, XAVIER BERNASCONI
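    To illustrate the idea of identifiers derived from an object's position in the scene hierarchy, here is a minimal sketch; deriving the identifier from the hierarchy path via CRC32, and the `Node` class, are illustrative assumptions rather than the patent's encoding.

    ```python
    import zlib

    class Node:
        """A node in a scene hierarchy (illustrative)."""
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []

    def assign_ids(node, path=""):
        """Derive each object's identifier from its path in the hierarchy.

        Samples in the deep image produced by the renderer would then carry
        the identifier of the object they came from, so compositing can
        later group samples per object.
        """
        full = path + "/" + node.name
        ids = {node.name: zlib.crc32(full.encode())}
        for child in node.children:
            ids.update(assign_ids(child, full))
        return ids

    scene = Node("root", [Node("car", [Node("wheel")]), Node("tree")])
    ids = assign_ids(scene)
    ```
    
    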
  • Publication number: 20160078675
    Abstract: Methods are disclosed for the computer generation of data for images that include hair, fur, or other strand-like material. A volume for the hair is specified, having a plurality of surfaces. A fluid flow simulation is performed within the volume, with a first surface of the volume being a source area through which fluid is simulated to enter the volume, and a second surface being an exit surface through which fluid is simulated as exiting the volume. The fluid flow simulation may be used to produce fluid flow lines, such as from a velocity vector field for the fluid. Fluid flow lines are selected, and image data of hairs that follow the fluid flow lines are generated. Other embodiments include generating animation sequences by generating images wherein the volume and surfaces vary between frames.
    Type: Application
    Filed: September 16, 2014
    Publication date: March 17, 2016
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Stephen D. Bowline, Nicholas Grant Rasmussen
  • Publication number: 20160012598
    Abstract: A system includes a visual data collector for collecting visual information from an image of one or more features of an object. The system also includes a physical data collector for collecting sensor information provided by one or more sensors attached to the object. The system also includes a computer system that includes a motion data combiner for combining the visual information and the sensor information. The motion data combiner is configured to determine the position of a representation of one or more of the features in a virtual representation of the object from the combined visual information and sensor information. Various types of virtual representations may be provided from the combined information; for example, one or more poses (e.g., position and orientation) of the object may be represented.
    Type: Application
    Filed: September 21, 2015
    Publication date: January 14, 2016
    Applicant: LUCASFILM ENTERTAINMENT COMPANY, LTD.
    Inventors: STEVE SULLIVAN, KEVIN WOOLEY, BRETT A. ALLEN, MICHAEL SANDERS
  • Publication number: 20150350628
    Abstract: A method may include presenting a scene from linear content on one or more display devices in an immersive environment, and receiving, from a user within the immersive environment, input to change an aspect of the scene. The method may also include accessing 3-D virtual scene information previously used to render the scene, and changing the 3-D virtual scene information according to the changed aspect of the scene. The method may additionally include rendering the 3-D virtual scene to incorporate the changed aspect, and presenting the rendered scene in real time in the immersive user environment.
    Type: Application
    Filed: December 15, 2014
    Publication date: December 3, 2015
    Applicant: Lucasfilm Entertainment Co. Ltd.
    Inventors: Mike Sanders, Kim Libreri, Nick Rasmussen, John Gaeta
  • Publication number: 20150348326
    Abstract: A method may include displaying, on one or more display devices in a virtual-reality environment, a visual representation of a 3-D virtual scene from the perspective of a subject location in the virtual-reality environment. The method may also include displaying, on the one or more display devices, a chroma-key background with the visual representation. The method may further include recording, using a camera, an image of the subject in the virtual-reality environment against the chroma-key background.
    Type: Application
    Filed: September 11, 2014
    Publication date: December 3, 2015
    Applicant: Lucasfilm Entertainment Co. Ltd.
    Inventors: Mike Sanders, Kim Libreri, Nick Rasmussen, John Gaeta
  • Publication number: 20150317765
    Abstract: A method of compressing a deep image representation may include receiving a deep image, where the deep image may include multiple pixels, and where each pixel in the deep image may include multiple samples. The method may also include compressing the deep image by combining samples in each pixel that are associated with the same primitives. This process may be repeated on a pixel-by-pixel basis. Some embodiments may use primitive IDs to match pixels to primitives through the rendering and compositing process.
    Type: Application
    Filed: April 30, 2014
    Publication date: November 5, 2015
    Applicant: Lucasfilm Entertainment Company, Ltd.
    Inventor: Shijun Haw
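    A minimal sketch of the per-pixel merge the abstract describes, combining samples that share a primitive ID: the sample fields and the front-to-back "over" accumulation of premultiplied color are assumptions for illustration, not the patent's exact scheme.

    ```python
    def compress_pixel(samples):
        """Merge samples in one pixel that share a primitive ID.

        samples: list of dicts with 'prim_id', 'depth', 'color' (premultiplied
        RGB list), and 'alpha'. Returns one merged sample per primitive.
        """
        merged = {}
        for s in sorted(samples, key=lambda s: s["depth"]):
            pid = s["prim_id"]
            if pid not in merged:
                merged[pid] = dict(s)  # nearest sample sets the merged depth
            else:
                m = merged[pid]
                # Front-to-back "over" accumulation of premultiplied color/alpha.
                m["color"] = [mc + (1 - m["alpha"]) * sc
                              for mc, sc in zip(m["color"], s["color"])]
                m["alpha"] = m["alpha"] + (1 - m["alpha"]) * s["alpha"]
        return list(merged.values())

    def compress_deep_image(pixels):
        """Apply the merge on a pixel-by-pixel basis."""
        return [compress_pixel(p) for p in pixels]
    ```
    
    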
  • Publication number: 20150294492
    Abstract: A method of generating unrecorded camera views may include receiving a plurality of 2-D video sequences of a subject in a real 3-D space, where each 2-D video sequence may depict the subject from a different perspective. The method may also include generating a 3-D representation of the subject in a virtual 3-D space, where a geometry and texture of the 3-D representation may be generated based on the 2-D video sequences, and the motion of the 3-D representation in the virtual 3-D space is based on motion of the subject in the real 3-D space. The method may additionally include generating a 2-D video sequence of the motion of the 3-D representation using a virtual camera in the virtual 3-D space, where the perspective of the virtual camera may be different than the perspectives of the plurality of 2-D video sequences.
    Type: Application
    Filed: August 25, 2014
    Publication date: October 15, 2015
    Applicant: Lucasfilm Entertainment Co., Ltd.
    Inventors: Hilmar Koch, Ronald Mallet, Kim Libreri, Paige Warner, Mike Sanders, John Gaeta
  • Publication number: 20150288956
    Abstract: An apparatus is disclosed which may serve as a target for calibrating a camera. The apparatus comprises one or more planar surfaces. The apparatus includes at least one fiducial marking on a planar surface. All planar markings on the apparatus are mutually distinguishable.
    Type: Application
    Filed: April 8, 2014
    Publication date: October 8, 2015
    Applicant: LUCASFILM ENTERTAINMENT COMPANY, LTD.
    Inventors: RONALD MALLET, JASON SNELL, JEFF SALTZMAN, DOUGLAS MOORE, PAIGE WARNER
  • Publication number: 20150288951
    Abstract: Methods and systems are disclosed for calibrating a camera using a calibration target apparatus that contains at least one fiducial marking on a planar surface. All planar markings on the apparatus are mutually distinguishable. Parameters of the camera are inferred from at least one image of the calibration target apparatus. In some embodiments, pixel coordinates of identified fiducial markings in an image are used with geometric knowledge of the apparatus to calculate camera parameters.
    Type: Application
    Filed: April 8, 2014
    Publication date: October 8, 2015
    Applicant: LUCASFILM ENTERTAINMENT COMPANY, LTD.
    Inventors: RONALD MALLET, JASON SNELL, JEFF SALTZMAN, DOUGLAS MOORE, PAIGE WARNER
  • Publication number: 20150235407
    Abstract: A method of applying a post-render motion blur to an object may include receiving a first image of the object. The first image need not be motion blurred, and the first image may include a first pixel and rendered color information for the first pixel. The method may also include receiving a second image of the object. The second image may be motion blurred, and the second image may include a second pixel and a location of the second pixel before the second image was motion blurred. Areas that are occluded in the second image may be identified and colored using a third image rendering only those areas. Unoccluded areas of the second image may be colored using information from the first image.
    Type: Application
    Filed: May 4, 2015
    Publication date: August 20, 2015
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Victor Schutz, Patrick Conran
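    The compositing rule in the abstract can be sketched roughly as follows: unoccluded pixels of the motion-blurred image take their color from the unblurred first image at each pixel's pre-blur location, while occluded areas take color from the third render. The array layout and nearest-pixel lookup are assumptions for illustration.

    ```python
    import numpy as np

    def apply_post_blur_color(sharp, preblur_locs, occluded_mask, occluded_fill):
        """Color a motion-blurred frame from auxiliary renders.

        sharp: (H, W, 3) unblurred render of the object.
        preblur_locs: (H, W, 2) integer (row, col) pre-blur location of each pixel.
        occluded_mask: (H, W) bool, True where the second image is occluded.
        occluded_fill: (H, W, 3) render covering only the occluded areas.
        """
        h, w, _ = sharp.shape
        ys = np.clip(preblur_locs[..., 0], 0, h - 1)
        xs = np.clip(preblur_locs[..., 1], 0, w - 1)
        out = sharp[ys, xs]                    # unoccluded pixels: sharp color at pre-blur location
        out[occluded_mask] = occluded_fill[occluded_mask]  # occluded areas: third render
        return out
    ```
    
    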
  • Publication number: 20150215623
    Abstract: Systems and techniques for dynamically capturing and reconstructing lighting are provided. The systems and techniques may be based on a stream of images capturing the lighting within an environment as a scene is shot. Reconstructed lighting data may be used to illuminate a character in a computer-generated environment as the scene is shot. For example, a method may include receiving a stream of images representing lighting of a physical environment. The method may further include compressing the stream of images to reduce an amount of data used in reconstructing the lighting of the physical environment and may further include outputting the compressed stream of images for reconstructing the lighting of the physical environment using the compressed stream, the reconstructed lighting being used to render a computer-generated environment.
    Type: Application
    Filed: January 24, 2014
    Publication date: July 30, 2015
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Michael SANDERS, Kiran BHAT, Curt Isamu MIYASHIRO, Jason SNELL, Stephane GRABLI
  • Publication number: 20150130801
    Abstract: Among other aspects, a computer-implemented method includes: receiving at least one command in a computer system from a handheld device; positioning a virtual camera and controlling a virtual scene according to the command; and in response to the command, generating an output to the handheld device for displaying a view of the virtual scene as controlled on a display of the handheld device, the view captured by the virtual camera as positioned.
    Type: Application
    Filed: January 12, 2015
    Publication date: May 14, 2015
    Applicant: LUCASFILM ENTERTAINMENT COMPANY, LTD.
    Inventors: KEVIN WOOLEY, MICHAEL SANDERS, STEVE SULLIVAN, SPENCER REYNOLDS, BRIAN CANTWELL
  • Publication number: 20150084950
    Abstract: Techniques for facial performance capture using an adaptive model are provided herein. For example, a computer-implemented method may include obtaining a three-dimensional scan of a subject and generating a customized digital model including a set of blendshapes using the three-dimensional scan, each of one or more blendshapes of the set of blendshapes representing at least a portion of a characteristic of the subject. The method may further include receiving input data of the subject, the input data including video data and depth data, tracking body deformations of the subject by fitting the input data using one or more of the blendshapes of the set, and fitting a refined linear model onto the input data using one or more adaptive principal component analysis shapes.
    Type: Application
    Filed: December 26, 2013
    Publication date: March 26, 2015
    Applicant: LucasFilm Entertainment Company Ltd.
    Inventors: Hao LI, Jihun YU, Yuting YE, Christoph BREGLER
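    Fitting input data with a set of blendshapes, as the last abstract describes, is commonly posed as a least-squares problem. The sketch below shows that generic formulation only, not the patent's adaptive-PCA refinement; all names are illustrative.

    ```python
    import numpy as np

    def fit_blendshape_weights(neutral, blendshapes, observed):
        """Solve for weights w so that neutral + blendshapes @ w ~= observed.

        neutral: (n,) flattened rest-pose vertex positions.
        blendshapes: (n, k) matrix whose columns are blendshape deltas.
        observed: (n,) flattened captured positions.
        Returns weights clipped to the conventional [0, 1] range.
        """
        delta = observed - neutral
        w, *_ = np.linalg.lstsq(blendshapes, delta, rcond=None)
        return np.clip(w, 0.0, 1.0)
    ```

    In a capture pipeline this solve would run per frame, with the observed vector assembled from the tracked video and depth data.
    
    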