Patents by Inventor Stephane Grabli

Stephane Grabli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230316587
    Abstract: A computer-implemented method of changing a face within an output image or video frame that includes: receiving an input image that includes a face presenting a facial expression in a pose; processing the image with a neural network encoder to generate a latent space point that is an encoded representation of the image; decoding the latent space point to generate an initial output image in accordance with a desired facial identity but with the facial expression and pose of the face in the input image; identifying a feature of the facial expression in the initial output image to edit; applying an adjustment vector to a latent space point corresponding to the initial output image to generate an adjusted latent space point; and decoding the adjusted latent space point to generate an adjusted output image in accordance with the desired facial identity but with the facial expression and pose of the face in the input image altered in accordance with the adjustment vector.
    Type: Application
    Filed: March 29, 2022
    Publication date: October 5, 2023
    Applicants: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC, DISNEY ENTERPRISES, INC.
    Inventors: Sirak Ghebremusse, Stéphane Grabli, Jacek Krzysztof Naruniec, Romann Matthew Weber, Christopher Richard Schroers
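The encode → latent adjustment → decode pipeline in the abstract above can be sketched in a few lines. This is a hypothetical toy, not the patented implementation: the encoder, decoder, and "smile" adjustment vector below are all stand-ins invented for illustration.

```python
# Toy sketch of latent-space expression editing: encode an image to a
# latent point, add an adjustment vector for one expression feature,
# then decode back to image space. All functions here are stand-ins.

def encode(image):
    """Stand-in encoder: maps an 'image' (list of floats) to a latent point."""
    return [x * 0.5 for x in image]

def decode(latent, identity_offset=0.0):
    """Stand-in decoder: maps a latent point back to image space,
    optionally shifted toward a desired facial identity."""
    return [x * 2.0 + identity_offset for x in latent]

def edit_expression(image, adjustment, identity_offset=0.0):
    latent = encode(image)
    # Apply the adjustment vector in latent space (e.g. "raise eyebrows").
    adjusted = [z + a for z, a in zip(latent, adjustment)]
    return decode(adjusted, identity_offset)

original = [0.2, 0.4, 0.6]
smile_vector = [0.1, 0.0, -0.1]   # hypothetical learned latent direction
edited = edit_expression(original, smile_vector)
print(edited)
```

The key point the claim hinges on is that the edit is applied to the latent point, not the pixels, so the decoder preserves identity, pose, and everything else while only the targeted feature moves.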
  • Patent number: 11671717
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: June 6, 2023
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
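The two-camera-system idea above can be illustrated with a minimal sketch, under the assumption (mine, not the patent's) that the IR frame's job is simply to reveal marker dots that the visible-spectrum frame cannot see. Frame layouts, thresholds, and function names are invented for illustration.

```python
# Hypothetical sketch of the dual-spectrum capture: the visible-spectrum
# frame is kept as the clean "beauty" image, while a simultaneously
# captured IR frame reveals markers (bright pixels) invisible in it.

def find_markers(ir_frame, threshold=200):
    """Return (row, col) positions of IR pixels bright enough to be markers."""
    return [
        (r, c)
        for r, row in enumerate(ir_frame)
        for c, value in enumerate(row)
        if value >= threshold
    ]

def combine(visible_frame, ir_frame):
    """Pair the untouched visible image with marker tracks from the IR image."""
    return {"beauty": visible_frame, "markers": find_markers(ir_frame)}

visible = [[10, 12], [11, 13]]    # no markers visible to this camera
infrared = [[0, 255], [0, 0]]     # one bright marker at (0, 1)
result = combine(visible, infrared)
print(result["markers"])  # [(0, 1)]
```

Because both systems observe the same performance at the same time, the marker tracks line up with the beauty plate without any retiming.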
  • Patent number: 11069135
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: July 20, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
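A "three-dimensional parameterized deformable model" where expressions come from varying parameter values is commonly realized as a blendshape rig. The sketch below is a simplification along those lines, not the patented solver; the shapes and parameter names are invented.

```python
# Illustrative blendshape-style deformable face model: different
# expressions are produced by varying parameter weights over a set of
# shape deltas, and weights solved from the subject's plate can drive a
# CG character that shares the same parameterization.

NEUTRAL = [0.0, 0.0, 0.0]                 # toy 3-vertex "face"
DELTAS = {
    "smile": [0.5, 0.0, -0.5],
    "brow_raise": [0.0, 1.0, 0.0],
}

def deform(params):
    """Evaluate the model: neutral shape plus weighted shape deltas."""
    out = list(NEUTRAL)
    for name, weight in params.items():
        for i, delta in enumerate(DELTAS[name]):
            out[i] += weight * delta
    return out

# Parameters solved from the actor's plate transfer directly to any
# character that exposes the same parameter set.
actor_params = {"smile": 1.0, "brow_raise": 0.5}
print(deform(actor_params))  # [0.5, 0.5, -0.5]
```

The camera-rig model and virtual lighting model mentioned in the abstract exist to make the solve well-posed: they let the solver compare a render of `deform(params)` against the plate under matching optics and illumination.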
  • Patent number: 11049332
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: June 29, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
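The iterative solve described above (differentiable renderer in the loop, parameters nudged until the render matches the plate) can be caricatured in one dimension. This is a hedged sketch of the general render-and-compare pattern, not the patent's solver; the renderer, loss, and step size are all invented, and a finite-difference gradient stands in for true differentiation.

```python
# Minimal render-and-compare optimization: a tiny "differentiable
# renderer" maps one model parameter to pixel intensities, and gradient
# descent adjusts the parameter until the render matches the observation.

def render(param):
    """Toy renderer: intensity of two 'pixels' as a function of a single
    expression parameter (stands in for geometry + lighting + albedo)."""
    return [0.5 + 0.3 * param, 0.5 - 0.2 * param]

def loss(param, observed):
    return sum((r - o) ** 2 for r, o in zip(render(param), observed))

def solve(observed, steps=200, lr=0.5, eps=1e-6):
    param = 0.0
    for _ in range(steps):
        # Central finite difference in place of an analytic gradient.
        grad = (loss(param + eps, observed) - loss(param - eps, observed)) / (2 * eps)
        param -= lr * grad
    return param

observed_plate = render(0.8)      # pretend the subject's true value is 0.8
fitted = solve(observed_plate)
print(round(fitted, 3))  # 0.8
```

The abstract's alternation between geometry and albedo estimates is the same loop run in stages: hold albedo fixed to solve shape, then use the recovered mesh to refine albedo, and repeat.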
  • Patent number: 10812693
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: October 20, 2020
    Assignee: LucasFilm Entertainment Company Ltd.
    Inventors: Leandro Estebecorena, John Knoll, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Publication number: 20200288050
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Application
    Filed: May 20, 2020
    Publication date: September 10, 2020
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Publication number: 20200286284
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.
    Type: Application
    Filed: November 12, 2019
    Publication date: September 10, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
  • Publication number: 20200286301
    Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
    Type: Application
    Filed: March 3, 2020
    Publication date: September 10, 2020
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
  • Patent number: 10701253
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: June 30, 2020
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Publication number: 20190124244
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Application
    Filed: August 13, 2018
    Publication date: April 25, 2019
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Publication number: 20190122374
    Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
    Type: Application
    Filed: August 13, 2018
    Publication date: April 25, 2019
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Leandro Estebecorena, John Knoll, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
  • Patent number: 9854176
    Abstract: Systems and techniques for dynamically capturing and reconstructing lighting are provided. The systems and techniques may be based on a stream of images capturing the lighting within an environment as a scene is shot. Reconstructed lighting data may be used to illuminate a character in a computer-generated environment as the scene is shot. For example, a method may include receiving a stream of images representing lighting of a physical environment. The method may further include compressing the stream of images to reduce an amount of data used in reconstructing the lighting of the physical environment and may further include outputting the compressed stream of images for reconstructing the lighting of the physical environment using the compressed stream, the reconstructed lighting being used to render a computer-generated environment.
    Type: Grant
    Filed: January 24, 2014
    Date of Patent: December 26, 2017
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Michael Sanders, Kiran Bhat, Curt Isamu Miyashiro, Jason Snell, Stephane Grabli
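One plausible (hypothetical, not the patented codec) way to compress a light-probe stream is aggressive spatial downsampling, since reconstructing diffuse scene lighting needs far less resolution than a beauty render. The sketch below shows that idea only.

```python
# Sketch of light-probe stream compression via block averaging: each
# incoming frame is reduced to coarse blocks before being shipped to the
# renderer that reconstructs the lighting.

def downsample(frame, factor=2):
    """Average non-overlapping factor x factor blocks of a 2-D frame."""
    rows, cols = len(frame), len(frame[0])
    out = []
    for r in range(0, rows, factor):
        out_row = []
        for c in range(0, cols, factor):
            block = [frame[rr][cc]
                     for rr in range(r, min(r + factor, rows))
                     for cc in range(c, min(c + factor, cols))]
            out_row.append(sum(block) / len(block))
        out.append(out_row)
    return out

def compress_stream(frames, factor=2):
    """Reduce the data used to reconstruct lighting from a frame stream."""
    return [downsample(f, factor) for f in frames]

stream = [[[4, 4, 2, 2],
           [4, 4, 2, 2]]]          # one 2x4 light-probe frame
print(compress_stream(stream))  # [[[4.0, 2.0]]]
```

The point of compressing on capture, as the abstract notes, is latency: the lighting can be reconstructed and applied to the CG character while the scene is still being shot.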
  • Publication number: 20150215623
    Abstract: Systems and techniques for dynamically capturing and reconstructing lighting are provided. The systems and techniques may be based on a stream of images capturing the lighting within an environment as a scene is shot. Reconstructed lighting data may be used to illuminate a character in a computer-generated environment as the scene is shot. For example, a method may include receiving a stream of images representing lighting of a physical environment. The method may further include compressing the stream of images to reduce an amount of data used in reconstructing the lighting of the physical environment and may further include outputting the compressed stream of images for reconstructing the lighting of the physical environment using the compressed stream, the reconstructed lighting being used to render a computer-generated environment.
    Type: Application
    Filed: January 24, 2014
    Publication date: July 30, 2015
    Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Michael Sanders, Kiran Bhat, Curt Isamu Miyashiro, Jason Snell, Stephane Grabli
  • Patent number: 8411967
    Abstract: A renderer allows for a flexible and temporally coherent ordering of strokes in the context of stroke-based animation. The relative order of the strokes is specified by the artist or inferred from geometric properties of the scene, such as occlusion, for each frame of a sequence, as a set of stroke pair-wise constraints. Using the received constraints, the strokes are partially ordered for each of the frames. Based on these partial orderings, for each frame, a permutation of the strokes is selected amongst the ones consistent with the frame's partial order, so as to globally improve the perceived temporal coherence of the animation. The sequence of frames can then, for instance, be rendered by ordering the strokes according to the selected set of permutations for the sequence of frames.
    Type: Grant
    Filed: May 2, 2011
    Date of Patent: April 2, 2013
    Assignee: Auryn Inc.
    Inventors: Stephane Grabli, Robert Kalnins, Nathan LeZotte, Amitabh Agrawal
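The per-frame ordering step above maps naturally onto a classic construction: pair-wise constraints define a partial order over the strokes, and a topological sort produces one consistent total drawing order. The sketch below shows that step with Kahn's algorithm; it is an illustration of the partial-order idea, not the patented algorithm, which additionally selects among consistent permutations to improve temporal coherence across frames.

```python
# Toy ordering step: constraints (a, b) mean "stroke a is drawn before
# stroke b". Kahn's algorithm returns one total order consistent with
# the partial order, or fails if the constraints are cyclic.

def order_strokes(strokes, constraints):
    after = {s: set() for s in strokes}
    indegree = {s: 0 for s in strokes}
    for first, second in constraints:
        if second not in after[first]:
            after[first].add(second)
            indegree[second] += 1
    ready = sorted(s for s in strokes if indegree[s] == 0)  # deterministic
    order = []
    while ready:
        s = ready.pop(0)
        order.append(s)
        for t in sorted(after[s]):
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    if len(order) != len(strokes):
        raise ValueError("constraints are cyclic: no consistent order")
    return order

# Occlusion-derived constraints: "sky" behind "tree", "tree" behind "bird".
print(order_strokes(["bird", "tree", "sky"],
                    [("sky", "tree"), ("tree", "bird")]))
```

Because a partial order usually admits many valid permutations, there is freedom left over, and it is exactly that freedom the patent exploits to keep the chosen permutation stable from frame to frame.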
  • Publication number: 20110205233
    Abstract: A renderer allows for a flexible and temporally coherent ordering of strokes in the context of stroke-based animation. The relative order of the strokes is specified by the artist or inferred from geometric properties of the scene, such as occlusion, for each frame of a sequence, as a set of stroke pair-wise constraints. Using the received constraints, the strokes are partially ordered for each of the frames. Based on these partial orderings, for each frame, a permutation of the strokes is selected amongst the ones consistent with the frame's partial order, so as to globally improve the perceived temporal coherence of the animation. The sequence of frames can then, for instance, be rendered by ordering the strokes according to the selected set of permutations for the sequence of frames.
    Type: Application
    Filed: May 2, 2011
    Publication date: August 25, 2011
    Inventors: Stephane Grabli, Robert Kalnins, Nathan LeZotte, Amitabh Agrawal
  • Patent number: 7936927
    Abstract: A renderer allows for a flexible and temporally coherent ordering of strokes in the context of stroke-based animation. The relative order of the strokes is specified by the artist or inferred from geometric properties of the scene, such as occlusion, for each frame of a sequence, as a set of stroke pair-wise constraints. Using the received constraints, the strokes are partially ordered for each of the frames. Based on these partial orderings, for each frame, a permutation of the strokes is selected amongst the ones consistent with the frame's partial order, so as to globally improve the perceived temporal coherence of the animation. The sequence of frames can then, for instance, be rendered by ordering the strokes according to the selected set of permutations for the sequence of frames.
    Type: Grant
    Filed: January 29, 2007
    Date of Patent: May 3, 2011
    Assignee: Auryn Inc.
    Inventors: Stephane Grabli, Robert Kalnins, Nathan LeZotte, Amitabh Agrawal
  • Patent number: 7746344
    Abstract: A renderer for performing stroke-based rendering determines whether two given overlapping strokes depict an occlusion in a three-dimensional scene. The renderer may then use this information to determine whether to apply an occlusion constraint between the strokes when rendering an image or a frame from an animation. In one implementation, the renderer determines whether the two strokes together depict a single view patch of surface in the scene (i.e., a single portion of three-dimensional surface in the scene as seen from the rendering viewpoint). The renderer builds an image-space patch of surface defined from the union of the two overlapping strokes and then determines whether there exists a single three-dimensional view patch of surface that projects onto the image-space patch and that contains both strokes' three-dimensional anchor points. Which stroke occludes the other can be determined by the relative three-dimensional depth of the strokes' anchor points from the rendering viewpoint.
    Type: Grant
    Filed: January 29, 2007
    Date of Patent: June 29, 2010
    Assignee: Auryn Inc.
    Inventors: Stephane Grabli, Robert Kalnins, Amitabh Agrawal, Nathan LeZotte
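The final step of the abstract above, deciding which of two overlapping strokes occludes the other by comparing the depth of their 3-D anchor points from the rendering viewpoint, reduces to a distance comparison. The sketch below shows only that depth test (the single-view-patch check that precedes it is omitted); data layout and names are invented for illustration.

```python
# Simplified occlusion decision: once two overlapping strokes are known
# to lie on a single surface patch, the stroke whose 3-D anchor point is
# nearer the viewpoint is the occluder.

def depth_from_viewpoint(anchor, viewpoint):
    """Euclidean distance between a stroke's 3-D anchor and the camera."""
    return sum((a - v) ** 2 for a, v in zip(anchor, viewpoint)) ** 0.5

def occluding_stroke(stroke_a, stroke_b, viewpoint=(0.0, 0.0, 0.0)):
    """Return the stroke whose anchor is nearer the viewpoint."""
    da = depth_from_viewpoint(stroke_a["anchor"], viewpoint)
    db = depth_from_viewpoint(stroke_b["anchor"], viewpoint)
    return stroke_a if da < db else stroke_b

near = {"name": "leaf", "anchor": (0.0, 0.0, 1.0)}
far = {"name": "branch", "anchor": (0.0, 0.0, 3.0)}
print(occluding_stroke(near, far)["name"])  # leaf
```

The result feeds the ordering machinery of the companion patents: "leaf occludes branch" becomes a pair-wise constraint that "branch" must be drawn before "leaf".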
  • Publication number: 20070176929
    Abstract: A renderer for performing stroke-based rendering determines whether two given overlapping strokes depict an occlusion in a three-dimensional scene. The renderer may then use this information to determine whether to apply an occlusion constraint between the strokes when rendering an image or a frame from an animation. In one implementation, the renderer determines whether the two strokes together depict a single view patch of surface in the scene (i.e., a single portion of three-dimensional surface in the scene as seen from the rendering viewpoint). The renderer builds an image-space patch of surface defined from the union of the two overlapping strokes and then determines whether there exists a single three-dimensional view patch of surface that projects onto the image-space patch and that contains both strokes' three-dimensional anchor points. Which stroke occludes the other can be determined by the relative three-dimensional depth of the strokes' anchor points from the rendering viewpoint.
    Type: Application
    Filed: January 29, 2007
    Publication date: August 2, 2007
    Inventors: Stephane Grabli, Robert Kalnins, Amitabh Agrawal, Nathan LeZotte
  • Publication number: 20070177802
    Abstract: A renderer allows for a flexible and temporally coherent ordering of strokes in the context of stroke-based animation. The relative order of the strokes is specified by the artist or inferred from geometric properties of the scene, such as occlusion, for each frame of a sequence, as a set of stroke pair-wise constraints. Using the received constraints, the strokes are partially ordered for each of the frames. Based on these partial orderings, for each frame, a permutation of the strokes is selected amongst the ones consistent with the frame's partial order, so as to globally improve the perceived temporal coherence of the animation. The sequence of frames can then, for instance, be rendered by ordering the strokes according to the selected set of permutations for the sequence of frames.
    Type: Application
    Filed: January 29, 2007
    Publication date: August 2, 2007
    Inventors: Stephane Grabli, Robert Kalnins, Nathan LeZotte, Amitabh Agrawal