Patents by Inventor Christoph Bregler

Christoph Bregler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9317954
    Abstract: Techniques for facial performance capture using an adaptive model are provided herein. For example, a computer-implemented method may include obtaining a three-dimensional scan of a subject and generating a customized digital model including a set of blendshapes using the three-dimensional scan, each of one or more blendshapes of the set of blendshapes representing at least a portion of a characteristic of the subject. The method may further include receiving input data of the subject, the input data including video data and depth data, tracking body deformations of the subject by fitting the input data using one or more of the blendshapes of the set, and fitting a refined linear model onto the input data using one or more adaptive principal component analysis shapes. (A minimal illustrative sketch of this two-stage fitting appears after this listing.)
    Type: Grant
    Filed: December 26, 2013
    Date of Patent: April 19, 2016
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Hao Li, Jihun Yu, Yuting Ye, Christoph Bregler
  • Patent number: 9002064
    Abstract: A computer program product tangibly embodied in a computer-readable storage medium includes instructions that when executed by a processor perform a method. The method includes identifying a frame of a video sequence, transforming a model into an initial guess for how the region appears in the frame, performing an exhaustive search of the frame, and performing a plurality of optimization procedures, wherein at least one additional model parameter is taken into account as each subsequent optimization procedure is initiated. A system includes a computer readable storage medium, a graphical user interface, an input device, a model for texture and shape of the region, the model generated using the video sequence and stored in the computer readable storage medium, and a solver component. (A minimal illustrative sketch of this coarse-to-fine tracking strategy appears after this listing.)
    Type: Grant
    Filed: November 4, 2013
    Date of Patent: April 7, 2015
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Christoph Bregler, Kiran S. Bhat, Brett A. Allen
  • Publication number: 20150084950
    Abstract: Techniques for facial performance capture using an adaptive model are provided herein. For example, a computer-implemented method may include obtaining a three-dimensional scan of a subject and generating a customized digital model including a set of blendshapes using the three-dimensional scan, each of one or more blendshapes of the set of blendshapes representing at least a portion of a characteristic of the subject. The method may further include receiving input data of the subject, the input data including video data and depth data, tracking body deformations of the subject by fitting the input data using one or more of the blendshapes of the set, and fitting a refined linear model onto the input data using one or more adaptive principal component analysis shapes.
    Type: Application
    Filed: December 26, 2013
    Publication date: March 26, 2015
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Hao Li, Jihun Yu, Yuting Ye, Christoph Bregler
  • Publication number: 20140219499
    Abstract: A computer program product tangibly embodied in a computer-readable storage medium includes instructions that when executed by a processor perform a method. The method includes identifying a frame of a video sequence, transforming a model into an initial guess for how the region appears in the frame, performing an exhaustive search of the frame, and performing a plurality of optimization procedures, wherein at least one additional model parameter is taken into account as each subsequent optimization procedure is initiated. A system includes a computer readable storage medium, a graphical user interface, an input device, a model for texture and shape of the region, the model generated using the video sequence and stored in the computer readable storage medium, and a solver component.
    Type: Application
    Filed: November 4, 2013
    Publication date: August 7, 2014
    Applicant: Lucasfilm Entertainment Company Ltd.
    Inventors: Christoph Bregler, Kiran S. Bhat, Brett A. Allen
  • Patent number: 8649555
    Abstract: A computer program product tangibly embodied in a computer-readable storage medium includes instructions that when executed by a processor perform a method. The method includes identifying a frame of a video sequence, transforming a model into an initial guess for how the region appears in the frame, performing an exhaustive search of the frame, and performing a plurality of optimization procedures, wherein at least one additional model parameter is taken into account as each subsequent optimization procedure is initiated. A system includes a computer readable storage medium, a graphical user interface, an input device, a model for texture and shape of the region, the model generated using the video sequence and stored in the computer readable storage medium, and a solver component.
    Type: Grant
    Filed: October 28, 2009
    Date of Patent: February 11, 2014
    Assignee: Lucasfilm Entertainment Company Ltd.
    Inventors: Christoph Bregler, Kiran S. Bhat, Brett A. Allen
  • Publication number: 20100104018
    Abstract: Provided and described herein are, e.g., exemplary embodiments of systems, methods, procedures, devices, computer-accessible media, computing arrangements and processing arrangements in accordance with the present disclosure related to body signature recognition and acoustic speaker verification utilizing body language features. For example, certain exemplary embodiments can include a computer-accessible medium containing executable instructions thereon. When one or more computing arrangements executes the instructions, the computing arrangement(s) can be configured to perform certain exemplary procedures, including (i) receiving first information relating to one or more visual features from a video, (ii) determining second information relating to motion vectors as a function of the first information, and (iii) computing a statistical representation of a plurality of frames of the video based on the second information. (A minimal illustrative sketch of this motion-statistics pipeline appears after this listing.)
    Type: Application
    Filed: August 11, 2009
    Publication date: April 29, 2010
    Applicant: New York University
    Inventors: Christoph Bregler, Cuong George Williams, Ian McDowall, Sally Rosenthal
  • Publication number: 20060192852
    Abstract: A system, method, software arrangement and computer-accessible medium are provided for tracking moveable objects, such as large balls, that a group of participants can interact with. For example, the motion of the objects may be used to control or influence the motion of certain virtual objects generated in a virtual environment, which may interact with other virtual objects. The virtual objects and their interactions may be used to generate video information, which can be displayed to the participants and which may indicate the occurrence of certain game-related events. Audio information may also be generated based on the interactions, and used to produce sounds separately from or in conjunction with the virtual objects. (A minimal illustrative sketch of this interaction loop appears after this listing.)
    Type: Application
    Filed: February 9, 2006
    Publication date: August 31, 2006
    Inventors: Sally Rosenthal, Christoph Bregler, Clothilde Castiglia, Jessica De Vincenzo, Roger Dubois, Kevin Feeley, Tom Igoe, Jonathan Meyer, Michael Naimark, Alexandru Postelnicu, Michael Rabinovich, Katie Salen, Jeremi Sudol, Bo Wright
  • Patent number: 6188776
    Abstract: The identification of hidden data, such as feature-based control points in an image, from a set of observable data, such as the image, is achieved through a two-stage approach. The first stage involves a learning process, in which a number of sample data sets, e.g. images, are analyzed to identify the correspondence between observable data, such as visual aspects of the image, and the desired hidden data, such as the control points. Two models are created. A feature appearance-only model is created from aligned examples of the feature in the observed data. In addition, each labeled data set is processed to generate a coupled model of the aligned observed data and the associated hidden data. In the image processing embodiment, these two models might be affine manifold models of an object's appearance and of the coupling between that appearance and a set of locations on the object's surface. (A minimal illustrative sketch of this coupled-model idea appears after this listing.)
    Type: Grant
    Filed: May 21, 1996
    Date of Patent: February 13, 2001
    Assignee: Interval Research Corporation
    Inventors: Michele Covell, Christoph Bregler
  • Patent number: 5880788
    Abstract: The synchronization of an existing video to a new soundtrack is carried out through the phonetic analysis of the original soundtrack and the new soundtrack. Individual speech sounds, such as phones, are identified in the soundtrack for the original video recording, and the images corresponding thereto are stored. The new soundtrack is similarly analyzed to identify individual speech sounds, which are used to select the stored images and create a new video sequence. The images in the sequence are then smoothly fitted to one another, to provide a video stream that is synchronized to the new soundtrack. This approach permits a given video sequence to be synchronized to any arbitrary utterance. Furthermore, the matching of the video images to the new speech sounds can be carried out in a highly automated manner, thereby reducing required manual effort. (A minimal illustrative sketch of this phone-indexed frame selection appears after this listing.)
    Type: Grant
    Filed: March 25, 1996
    Date of Patent: March 9, 1999
    Assignee: Interval Research Corporation
    Inventor: Christoph Bregler
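
Illustrative sketches. The code below is not part of the patent record; each sketch is a rough, hedged reconstruction of the idea summarized in the corresponding abstract, using synthetic data and toy parameterizations chosen purely for illustration.

Patent 9317954 (and its application, 20150084950) describes a two-stage fit: blendshape weights are fitted to video-plus-depth input, and the remaining error is absorbed by a refined linear model built from adaptive principal-component-analysis shapes. The sketch below assumes a toy mesh, random blendshapes, and plain least squares; it does not reproduce the patented tracking or adaptation pipeline.

```python
# A minimal sketch of the two-stage fit described in patent 9317954: first fit
# blendshape weights to the observed geometry, then refine the residual with a
# small set of adaptive PCA ("corrective") shapes. All shapes and data here are
# synthetic placeholders; the real method works on tracked video + depth input.
import numpy as np

rng = np.random.default_rng(0)

n_vertices = 500                      # toy mesh size (hypothetical)
n_blendshapes = 10
n_pca_shapes = 4

neutral = rng.normal(size=3 * n_vertices)                       # rest-pose geometry
blendshapes = rng.normal(size=(3 * n_vertices, n_blendshapes))  # per-shape offsets
pca_shapes = rng.normal(size=(3 * n_vertices, n_pca_shapes))    # adaptive correctives

# Pretend this is geometry reconstructed from the depth/video input.
true_weights = rng.uniform(0.0, 1.0, size=n_blendshapes)
observed = neutral + blendshapes @ true_weights + 0.01 * rng.normal(size=3 * n_vertices)

# Stage 1: fit blendshape weights by linear least squares and clamp them to [0, 1].
weights, *_ = np.linalg.lstsq(blendshapes, observed - neutral, rcond=None)
weights = np.clip(weights, 0.0, 1.0)
coarse_fit = neutral + blendshapes @ weights

# Stage 2: fit the remaining residual with the adaptive PCA shapes
# (the "refined linear model" of the abstract).
residual = observed - coarse_fit
pca_coeffs, *_ = np.linalg.lstsq(pca_shapes, residual, rcond=None)
refined_fit = coarse_fit + pca_shapes @ pca_coeffs

print("residual after blendshape fit :", np.linalg.norm(residual))
print("residual after PCA refinement :", np.linalg.norm(observed - refined_fit))
```

In a real capture session this fit would be repeated per frame, with the corrective shapes adapted over time; here both stages are single least-squares solves.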
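Patents 9002064 and 8649555 describe an exhaustive search that produces an initial guess, followed by optimization procedures that each take at least one additional model parameter into account. The sketch below assumes a toy image, a sum-of-squared-differences cost, and a translation/gain/scale parameterization invented for illustration; the patented texture-and-shape model and solver are not reproduced.

```python
# A minimal sketch of the coarse-to-fine strategy in patents 9002064 and 8649555:
# an exhaustive search gives an initial guess, then successive optimization stages
# refine it, each stage adding one model parameter (translation, then brightness
# gain, then scale). Image, template, and parameterization are toy stand-ins.
import numpy as np

rng = np.random.default_rng(1)

frame = rng.uniform(size=(120, 160))
template = frame[40:60, 70:100].copy() * 1.2      # region to track, with a gain change
th, tw = template.shape

def ssd(a, b):
    return float(np.sum((a - b) ** 2))

# Stage 0: exhaustive search over integer translations (the initial guess).
best = min(((ssd(frame[y:y + th, x:x + tw], template), y, x)
            for y in range(frame.shape[0] - th)
            for x in range(frame.shape[1] - tw)), key=lambda t: t[0])
_, y0, x0 = best

# Stage 1: add a brightness-gain parameter and solve for it in closed form.
patch = frame[y0:y0 + th, x0:x0 + tw]
gain = float(np.sum(patch * template) / np.sum(patch ** 2))

# Stage 2: add a (discrete) scale parameter on top of translation + gain.
def score(scale):
    h, w = int(round(th * scale)), int(round(tw * scale))
    if y0 + h > frame.shape[0] or x0 + w > frame.shape[1]:
        return np.inf
    # nearest-neighbour resample of the candidate patch back to template size
    ys = (np.arange(th) * h / th).astype(int)
    xs = (np.arange(tw) * w / tw).astype(int)
    cand = frame[y0:y0 + h, x0:x0 + w][np.ix_(ys, xs)]
    return ssd(gain * cand, template)

scale = min((0.9, 1.0, 1.1), key=score)
print(f"translation=({y0},{x0})  gain={gain:.3f}  scale={scale}")
```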
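Publication 20100104018 describes receiving visual features from a video, deriving motion vectors from them, and computing a statistical representation over many frames. The sketch below substitutes synthetic point trajectories for real tracked features and uses a mean-and-covariance summary as the per-video statistic; the actual features and statistics of the application may differ.

```python
# A minimal sketch of the pipeline in publication 20100104018: visual features are
# extracted from video, motion vectors are derived from them, and a statistical
# representation over many frames serves as a "body signature". Feature tracking
# is faked with synthetic trajectories.
import numpy as np

rng = np.random.default_rng(2)

def body_signature(trajectories):
    """trajectories: (n_frames, n_points, 2) array of tracked 2-D feature positions."""
    # (ii) motion vectors: frame-to-frame displacement of each tracked point
    motion = np.diff(trajectories, axis=0)            # (n_frames-1, n_points, 2)
    flat = motion.reshape(motion.shape[0], -1)        # one motion descriptor per frame
    # (iii) statistical representation over frames: mean and covariance of descriptors
    return flat.mean(axis=0), np.cov(flat, rowvar=False)

# Two synthetic "speakers" with different gesture amplitudes.
calm = np.cumsum(0.2 * rng.normal(size=(200, 5, 2)), axis=0)
lively = np.cumsum(1.0 * rng.normal(size=(200, 5, 2)), axis=0)

mean_a, cov_a = body_signature(calm)
mean_b, cov_b = body_signature(lively)
print("motion energy, speaker A:", np.trace(cov_a))
print("motion energy, speaker B:", np.trace(cov_b))
```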
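Publication 20060192852 describes tracked physical objects whose motion influences virtual objects, with the resulting interactions driving video and audio feedback. The sketch below fakes the tracking with a scripted ball path and reduces the physics to a simple attraction rule; it illustrates only the control loop, not the patented system.

```python
# A minimal sketch of the interaction loop in publication 20060192852: tracked ball
# positions influence virtual objects, and collisions between virtual objects
# produce events that could drive video and audio output. Tracking, rendering,
# and sound are all stubbed out.
import numpy as np

rng = np.random.default_rng(3)

def step(virtual_pos, virtual_vel, tracked_balls, dt=0.1, radius=0.5):
    """Advance the virtual objects in place; each is pulled toward the nearest tracked ball."""
    events = []
    for i in range(len(virtual_pos)):
        dists = np.linalg.norm(tracked_balls - virtual_pos[i], axis=1)
        nearest = tracked_balls[np.argmin(dists)]
        virtual_vel[i] += dt * (nearest - virtual_pos[i])   # spring-like pull toward the ball
    virtual_pos += dt * virtual_vel
    # virtual-object collisions; in the abstract these interactions drive video/audio feedback
    for i in range(len(virtual_pos)):
        for j in range(i + 1, len(virtual_pos)):
            if np.linalg.norm(virtual_pos[i] - virtual_pos[j]) < 2 * radius:
                events.append((i, j))
    return events

pos = rng.uniform(0.0, 10.0, size=(4, 2))    # virtual objects
vel = np.zeros((4, 2))

for frame in range(60):
    # stand-in for the tracked positions of two physical balls
    balls = np.array([[5.0 + np.cos(frame / 5.0), 5.0],
                      [5.0, 5.0 + np.sin(frame / 5.0)]])
    events = step(pos, vel, balls, dt=0.1)
    if events:
        print(f"frame {frame}: {len(events)} virtual collision(s) -> trigger sound/video cues")
```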
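Patent 6188776 describes learning a coupled model of aligned observed data (appearance) and hidden data (control-point locations), then using that coupling to infer the hidden data for new observations. The sketch below assumes an affine subspace fitted by SVD to jointly stacked appearance and control-point vectors, all synthetic; the patented manifold models and alignment steps are not reproduced.

```python
# A minimal sketch of the coupled-model idea in patent 6188776: from labeled
# training examples, build an affine (mean + linear subspace) model spanning both
# appearance vectors and hidden control-point locations; for a new example, fit
# the subspace coefficients from the appearance alone and read off the predicted
# control points. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(4)

n_train, d_appear, d_points, k = 200, 60, 10, 5

# Synthetic ground truth: appearance and control points share k latent factors.
latent = rng.normal(size=(n_train, k))
A = rng.normal(size=(k, d_appear))
P = rng.normal(size=(k, d_points))
appearance = latent @ A + 0.01 * rng.normal(size=(n_train, d_appear))
points = latent @ P + 0.01 * rng.normal(size=(n_train, d_points))

# Coupled affine model: mean + top-k right singular vectors of the joint data.
joint = np.hstack([appearance, points])
mean = joint.mean(axis=0)
_, _, vt = np.linalg.svd(joint - mean, full_matrices=False)
basis = vt[:k]                                  # (k, d_appear + d_points)
basis_a, basis_p = basis[:, :d_appear], basis[:, d_appear:]
mean_a, mean_p = mean[:d_appear], mean[d_appear:]

# Inference: given only the appearance of a new example, estimate its control points.
new_latent = rng.normal(size=(1, k))
new_appearance = new_latent @ A
coeffs, *_ = np.linalg.lstsq(basis_a.T, (new_appearance - mean_a).ravel(), rcond=None)
predicted_points = mean_p + coeffs @ basis_p
true_points = (new_latent @ P).ravel()
print("control-point prediction error:", np.linalg.norm(predicted_points - true_points))
```

The coupling is what makes the hidden data recoverable: because appearance and control points share one subspace, coefficients estimated from appearance alone also determine the control-point block.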
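Patent 5880788 describes indexing original video frames by the speech sound spoken in them, analyzing a new soundtrack into the same units, and assembling a smoothed image sequence that matches the new audio. The sketch below replaces phonetic analysis with hand-written phone labels and images with small vectors; it shows only the lookup-and-blend idea.

```python
# A minimal sketch of the re-synchronization idea in patent 5880788: frames of the
# original footage are indexed by the phone spoken in them, the new soundtrack is
# reduced to a phone sequence, and a new image sequence is assembled by looking up
# matching frames and smoothing between them.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(5)

# Original footage: one toy "image" (a vector) per frame, plus the phone spoken in it.
phones = ["AA", "B", "IY", "M", "AA", "S", "IY", "B"]
frames = [rng.uniform(size=16) for _ in phones]

# Index the original frames by phone.
frames_by_phone = defaultdict(list)
for phone, frame in zip(phones, frames):
    frames_by_phone[phone].append(frame)

def redub(new_phone_sequence, blend=0.5):
    """Assemble an image sequence matching the phone sequence of a new soundtrack."""
    sequence = []
    for phone in new_phone_sequence:
        candidates = frames_by_phone.get(phone)
        if not candidates:
            continue              # a real system would fall back to a visually similar phone
        frame = candidates[0]
        if sequence:              # smooth the transition from the previous frame
            sequence.append(blend * sequence[-1] + (1 - blend) * frame)
        sequence.append(frame)
    return sequence

new_video = redub(["B", "AA", "S", "IY"])
print(f"assembled {len(new_video)} frames for the new soundtrack")
```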