Patents by Inventor Steve Sullivan
Steve Sullivan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8681158
Abstract: A computer-implemented method includes comparing one or more surface features to a motion model. The surface feature or surface features represent a portion of an object in an image. The method also includes identifying a representation of the object from the motion model, based upon the comparison.
Type: Grant
Filed: March 5, 2012
Date of Patent: March 25, 2014
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco G. Callari
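The comparison and identification steps read like a nearest-pose search over the motion model. A minimal sketch under that assumption (the dictionary-of-poses representation, feature vectors, and names are illustrative, not from the patent):

```python
import math

def identify_pose(surface_features, motion_model):
    """Return the model pose whose predicted features lie nearest
    (in Euclidean distance) to the observed surface features."""
    def err(predicted):
        return math.sqrt(sum((p - f) ** 2
                             for p, f in zip(predicted, surface_features)))
    # min over the dict iterates pose names; pick the best-matching one
    return min(motion_model, key=lambda name: err(motion_model[name]))

# Observed features close to the "open" pose's prediction
pose = identify_pose([0.9, 0.1], {"open": [1.0, 0.0], "closed": [0.0, 1.0]})
```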
-
Patent number: 8674998
Abstract: The present disclosure includes, among other things, systems, methods and program products for generating animation keyframes and a corresponding 3D animation sequence from a plurality of 2D images.
Type: Grant
Filed: August 29, 2008
Date of Patent: March 18, 2014
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Adam Schnitzer, Steve Sullivan
-
Patent number: 8624904
Abstract: A system includes a computer system capable of representing one or more animated characters. The computer system includes a blendshape manager that combines multiple blendshapes to produce the animated character. The computer system also includes an expression manager to respectively adjust one or more control parameters associated with each of the plurality of blendshapes for adjusting an expression of the animated character. The computer system also includes a corrective element manager that applies one or more corrective elements to the combined blendshapes based upon at least one of the control parameters. The one or more applied corrective elements are adjustable based upon one or more of the control parameters absent the introduction of one or more additional control parameters.
Type: Grant
Filed: June 22, 2012
Date of Patent: January 7, 2014
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Michael Koperwas, Frederic P. Pighin, Cary Phillips, Steve Sullivan, Eduardo Hueso
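The blendshape combination and parameter-driven correctives could be sketched as a linear blend plus a corrective shape whose strength is a product of existing control parameters, so no new parameter is introduced. All names and data layouts below are assumptions, not the patented implementation:

```python
def combine_blendshapes(neutral, deltas, weights):
    """Linearly combine blendshape deltas onto a neutral mesh.
    neutral: list of (x, y, z) vertex positions
    deltas:  dict name -> per-vertex offsets
    weights: dict name -> control parameter in [0, 1]"""
    result = [list(v) for v in neutral]
    for name, delta in deltas.items():
        w = weights.get(name, 0.0)
        for i, (dx, dy, dz) in enumerate(delta):
            result[i][0] += w * dx
            result[i][1] += w * dy
            result[i][2] += w * dz
    return result

def apply_corrective(mesh, corrective_delta, weights, driver_names):
    """Apply a corrective shape whose strength is the product of the
    existing driver weights -- no additional control parameter."""
    w = 1.0
    for name in driver_names:
        w *= weights.get(name, 0.0)
    return [[x + w * dx, y + w * dy, z + w * dz]
            for (x, y, z), (dx, dy, dz) in zip(mesh, corrective_delta)]

neutral = [(0.0, 0.0, 0.0)]
deltas = {"smile": [(1.0, 0.0, 0.0)], "jaw": [(0.0, 1.0, 0.0)]}
weights = {"smile": 0.5, "jaw": 1.0}
mesh = combine_blendshapes(neutral, deltas, weights)
corrected = apply_corrective(mesh, [(0.0, 0.0, 2.0)], weights, ["smile", "jaw"])
```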
-
Patent number: 8610713
Abstract: In general, one or more aspects of the subject matter described in this specification can include associating with each clip in a sequence of one or more clips a copy of a three dimensional (3D) scene that was used to create the clip, where the clip is a sequence of one or more images that depict the clip's respective 3D scene from the perspective of one or more virtual cameras. Input identifying a clip in the sequence is received. In response to the receiving, a copy of the identified clip's associated copy of the 3D scene is presented in an editor.
Type: Grant
Filed: June 22, 2012
Date of Patent: December 17, 2013
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Max S-Han Chen, Jeffery Bruce Yost
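The per-clip scene copies can be illustrated with deep copies; `Clip`, `open_in_editor`, and the scene layout are hypothetical names, not from the patent:

```python
import copy

class Clip:
    """A clip that owns a private copy of the 3D scene used to create it,
    so later edits to the live scene never disturb the clip."""
    def __init__(self, name, scene):
        self.name = name
        self.scene = copy.deepcopy(scene)   # per-clip copy of the scene

def open_in_editor(clip):
    """Present a fresh copy of the identified clip's associated scene,
    so the editor works on a copy of the copy."""
    return copy.deepcopy(clip.scene)

live_scene = {"camera": [0.0, 0.0, 0.0]}
shot = Clip("shot_010", live_scene)
live_scene["camera"][0] = 9.0        # editing the live scene...
editing_copy = open_in_editor(shot)
editing_copy["camera"][2] = 5.0      # ...or the editor copy...
# ...leaves the clip's own scene untouched
```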
-
Patent number: 8542236
Abstract: A computer-implemented method includes transferring motion information from one or more motion meshes to an animation mesh. The motion mesh represents the motion of surface features of an object. A shape mesh provides a portion of the shape of the object to the animation mesh.
Type: Grant
Filed: January 16, 2007
Date of Patent: September 24, 2013
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco G. Callari
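One simplified way to picture the motion transfer is giving each animation-mesh vertex the displacement of its nearest motion-mesh vertex. A sketch under that assumption (the patented transfer is more involved; all names are illustrative):

```python
import math

def transfer_motion(motion_verts, motion_disps, anim_verts):
    """Assign each animation-mesh vertex the displacement of its nearest
    motion-mesh vertex (a nearest-neighbour stand-in for the transfer)."""
    def nearest(v):
        return min(range(len(motion_verts)),
                   key=lambda i: math.dist(v, motion_verts[i]))
    return [motion_disps[nearest(v)] for v in anim_verts]

disps = transfer_motion([(0.0, 0.0), (10.0, 0.0)],   # motion-mesh vertices
                        [(1.0, 0.0), (0.0, 1.0)],    # their displacements
                        [(1.0, 0.0), (9.0, 0.0)])    # animation-mesh vertices
```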
-
Patent number: 8537164
Abstract: Systems and methods are described, which create a mapping from a space of a source object (e.g., source facial expressions) to a space of a target object (e.g., target facial expressions). In certain implementations, the mapping is learned based on a training set composed of corresponding shapes (e.g., facial expressions) in each space. The user can create the training set by selecting expressions from, for example, captured source performance data, and by sculpting corresponding target expressions. Additional target shapes (e.g., target facial expressions) can be interpolated and extrapolated from the shapes in the training set to generate corresponding shapes for potential source shapes (e.g., facial expressions).
Type: Grant
Filed: October 10, 2011
Date of Patent: September 17, 2013
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Frederic P. Pighin, Cary Phillips, Steve Sullivan
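The learned mapping with interpolation between training expressions could be stood in for by inverse-distance weighting over the training pairs. A hedged sketch (the patent does not prescribe this scheme; all names and shapes are illustrative):

```python
import math

def map_expression(source, training_pairs, power=2.0):
    """Map a source-space shape to target space by inverse-distance
    weighting over (source_shape, target_shape) training pairs."""
    dists = []
    for src, tgt in training_pairs:
        d = math.dist(source, src)
        if d == 0.0:
            return list(tgt)          # exact training expression
        dists.append((d, tgt))
    inv = [(1.0 / d ** power, tgt) for d, tgt in dists]
    total = sum(w for w, _ in inv)
    dim = len(training_pairs[0][1])
    return [sum(w * tgt[i] for w, tgt in inv) / total for i in range(dim)]

pairs = [([0.0, 0.0], [0.0, 0.0]),    # neutral  -> sculpted neutral
         ([1.0, 0.0], [10.0, 0.0])]   # smile    -> sculpted smile
halfway = map_expression([0.5, 0.0], pairs)   # interpolated new expression
```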
-
Patent number: 8253728
Abstract: In general, one or more aspects of the subject matter described in this specification can include associating with each clip in a sequence of one or more clips a copy of a three dimensional (3D) scene that was used to create the clip, where the clip is a sequence of one or more images that depict the clip's respective 3D scene from the perspective of one or more virtual cameras. Input identifying a clip in the sequence is received. In response to the receiving, a copy of the identified clip's associated copy of the 3D scene is presented in an editor.
Type: Grant
Filed: February 25, 2008
Date of Patent: August 28, 2012
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Max S-Han Chen, Jeffrey Bruce Yost
-
Patent number: 8207971
Abstract: A system includes a computer system capable of representing one or more animated characters. The computer system includes a blendshape manager that combines multiple blendshapes to produce the animated character. The computer system also includes an expression manager to respectively adjust one or more control parameters associated with each of the plurality of blendshapes for adjusting an expression of the animated character. The computer system also includes a corrective element manager that applies one or more corrective elements to the combined blendshapes based upon at least one of the control parameters. The one or more applied corrective elements are adjustable based upon one or more of the control parameters absent the introduction of one or more additional control parameters.
Type: Grant
Filed: February 19, 2009
Date of Patent: June 26, 2012
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Michael Koperwas, Frederic P. Pighin, Cary Phillips, Steve Sullivan, Eduardo Hueso
-
Patent number: 8199152
Abstract: A computer-implemented method includes comparing content captured during one session and content captured during another session. A surface feature of an object represented in the content of one session corresponds to a surface feature of an object represented in the content of the other session. The method also includes substantially aligning the surface features of the sessions and combining the aligned content.
Type: Grant
Filed: April 13, 2007
Date of Patent: June 12, 2012
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco G. Callari
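A translation-only stand-in for the alignment step: shift one session's features by the mean offset of corresponding features, then pool both sessions. The patented alignment is more general; names and shapes here are illustrative:

```python
def align_and_combine(session_a, session_b):
    """Align session B's surface features onto session A by the mean
    offset of corresponding features, then combine both feature sets."""
    dim = len(session_a[0])
    n = len(session_a)
    # mean per-axis offset between corresponding features
    offset = [sum(a[i] - b[i] for a, b in zip(session_a, session_b)) / n
              for i in range(dim)]
    aligned_b = [[b[i] + offset[i] for i in range(dim)] for b in session_b]
    return session_a + aligned_b

combined = align_and_combine([[0.0, 0.0], [2.0, 0.0]],
                             [[1.0, 0.0], [3.0, 0.0]])
```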
-
Patent number: 8144153
Abstract: A computer-implemented method includes selecting a subset of images from a set of captured images. A surface feature of one object is represented in the content of each selected subset image. The method also includes decomposing the surface feature content of each selected image to produce a model of representations of the object.
Type: Grant
Filed: November 20, 2007
Date of Patent: March 27, 2012
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco Callari
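The subset selection and decomposition could be sketched as greedy farthest-point sampling followed by a mean-plus-offsets linear model (a crude stand-in for the patent's decomposition; all names are assumptions):

```python
import math

def select_subset(features, k):
    """Greedy farthest-point selection: keep the k most mutually
    dissimilar per-image feature vectors."""
    chosen = [0]
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i in range(len(features)):
            if i in chosen:
                continue
            d = min(math.dist(features[i], features[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

def build_model(features, subset):
    """Decompose the selected features into a mean shape plus
    per-example offsets -- a minimal linear shape model."""
    sel = [features[i] for i in subset]
    dim = len(sel[0])
    mean = [sum(f[i] for f in sel) / len(sel) for i in range(dim)]
    basis = [[f[i] - m for i, m in enumerate(mean)] for f in sel]
    return mean, basis

feats = [[0.0, 0.0], [10.0, 0.0], [5.0, 0.0], [0.0, 1.0]]
subset = select_subset(feats, 2)
mean, basis = build_model(feats, subset)
```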
-
Patent number: 8130225
Abstract: A computer-implemented method includes comparing one or more surface features to a motion model. The surface feature or surface features represent a portion of an object in an image. The method also includes identifying a representation of the object from the motion model, based upon the comparison.
Type: Grant
Filed: April 13, 2007
Date of Patent: March 6, 2012
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco G. Callari
-
Publication number: 20120002017
Abstract: In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures.
Type: Application
Filed: September 9, 2011
Publication date: January 5, 2012
Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Steve Sullivan, Colin Davidson, Michael Sanders, Kevin Wooley
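The final alignment step, reduced to two dimensions, amounts to finding the rotation that best maps the known marker configuration onto the observed positions. A minimal least-squares sketch (the publication works with 3D ray traces, so this is only an illustration):

```python
import math

def estimate_orientation(known, observed):
    """Least-squares rotation angle (2-D, about the origin) aligning a
    known marker configuration with observed marker positions."""
    # cross- and dot-product sums give the optimal angle via atan2
    s = sum(kx * oy - ky * ox for (kx, ky), (ox, oy) in zip(known, observed))
    c = sum(kx * ox + ky * oy for (kx, ky), (ox, oy) in zip(known, observed))
    return math.atan2(s, c)

# Known marks observed after a 90-degree turn of the support structure
angle = estimate_orientation([(1.0, 0.0), (0.0, 1.0)],
                             [(0.0, 1.0), (-1.0, 0.0)])
```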
-
Patent number: 8035643
Abstract: Systems and methods are described, which create a mapping from a space of a source object (e.g., source facial expressions) to a space of a target object (e.g., target facial expressions). In certain implementations, the mapping is learned based on a training set composed of corresponding shapes (e.g., facial expressions) in each space. The user can create the training set by selecting expressions from, for example, captured source performance data, and by sculpting corresponding target expressions. Additional target shapes (e.g., target facial expressions) can be interpolated and extrapolated from the shapes in the training set to generate corresponding shapes for potential source shapes (e.g., facial expressions).
Type: Grant
Filed: March 19, 2007
Date of Patent: October 11, 2011
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Frederic P. Pighin, Cary Phillips, Steve Sullivan
-
Patent number: 8019137
Abstract: In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures.
Type: Grant
Filed: September 14, 2009
Date of Patent: September 13, 2011
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Colin Davidson, Michael Sanders, Kevin Wooley
-
INTEGRATED ONLINE GAMING PORTAL OFFERING ENTERTAINMENT-RELATED CASUAL GAMES AND USER-GENERATED MEDIA
Publication number: 20110098108
Abstract: An integrated online gaming portal offers entertainment-related casual games and/or user-generated media. The integrated online gaming portal offers a variety of features, including media-based casual games, casual games featuring user-generated content, and a media-based horoscope. Media is received from a variety of sources, including TV and movie studios, actors/actresses, sponsors, and the user him- or herself. A media-integrated game is generated by incorporating the received media into a portal game, and a user is provided with access to the generated game. In some embodiments, the generated game is hosted by the portal server, while in other embodiments the generated game is downloaded to a computer or mobile device of the user.
Type: Application
Filed: July 7, 2010
Publication date: April 28, 2011
Applicant: Exponential Entertainment, Inc.
Inventors: William Kuper, David M. Long, Kelly Long, Ryan Ciociola, Dana Hogenson, Steve Sullivan
-
Patent number: 7848564
Abstract: In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures.
Type: Grant
Filed: March 16, 2006
Date of Patent: December 7, 2010
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Colin Davidson, Michael Sanders, Kevin Wooley
-
Publication number: 20100164862
Abstract: A system includes a visual data collector for collecting visual information from an image of one or more features of an object. The system also includes a physical data collector for collecting sensor information provided by one or more sensors attached to the object. The system also includes a computer system that includes a motion data combiner for combining the visual information and the sensor information. The motion data combiner is configured to determine the position of a representation of one or more of the features in a virtual representation of the object from the combined visual information and sensor information. Various types of virtual representations may be provided from the combined information; for example, one or more poses (e.g., position and orientation) of the object may be represented.
Type: Application
Filed: July 21, 2009
Publication date: July 1, 2010
Inventors: Steve Sullivan, Kevin Wooley, Brett A. Allen, Michael Sanders
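The publication does not commit to a specific combination rule; one hypothetical stand-in is a confidence-weighted average of the visual and sensor position estimates:

```python
def fuse_estimates(visual, sensor, w_visual, w_sensor):
    """Confidence-weighted average of a visually tracked feature position
    and a sensor-predicted position (an assumed fusion rule, for
    illustration only)."""
    total = w_visual + w_sensor
    return [(w_visual * v + w_sensor * s) / total
            for v, s in zip(visual, sensor)]

# equal confidence in both sources -> simple midpoint
position = fuse_estimates([1.0, 0.0, 0.0], [3.0, 0.0, 0.0], 1.0, 1.0)
```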
-
Publication number: 20100002934
Abstract: In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures.
Type: Application
Filed: September 14, 2009
Publication date: January 7, 2010
Inventors: Steve Sullivan, Colin Davidson
-
Patent number: 7573475
Abstract: A method of creating a complementary stereoscopic image pair is described. The method includes receiving a first 2D image comprising image data, where the first 2D image is captured from a first camera location. The method also includes projecting at least a portion of the first 2D image onto computer-generated geometry. The image data has depth values associated with the computer-generated geometry. The method also includes rendering, using the computer-generated geometry and a second camera location that differs from the first camera location, a second 2D image that is stereoscopically complementary to the first 2D image, and infilling image data that is absent from the second 2D image.
Type: Grant
Filed: June 1, 2006
Date of Patent: August 11, 2009
Assignee: Industrial Light & Magic
Inventors: Steve Sullivan, Alan D. Trombla, Francesco G. Callari
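Under a simplified pinhole model with a purely horizontal baseline, the re-rendering and hole detection could be sketched as a per-pixel disparity shift of f·B/Z (the parameters, pixel layout, and collision rule are all assumptions, not the patented pipeline):

```python
def make_complementary_view(image, focal, baseline, width):
    """Reproject pixels with known depth into a horizontally offset
    second camera via disparity f*B/Z, and report the empty pixels
    that would need infilling. image maps (x, y) -> (depth, value)."""
    view = {}
    for (x, y), (z, value) in image.items():
        nx = int(round(x - focal * baseline / z))   # shift by disparity
        if 0 <= nx < width:
            # on collisions, keep the nearer (smaller-depth) surface
            if (nx, y) not in view or z < view[(nx, y)][0]:
                view[(nx, y)] = (z, value)
    rows = {y for _, y in image}
    holes = {(x, y) for y in rows for x in range(width)
             if (x, y) not in view}
    return view, holes

# one pixel at depth 2 shifts left by 4*1/2 = 2 columns
view, holes = make_complementary_view({(5, 0): (2.0, "a")}, 4.0, 1.0, 8)
```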
-
Publication number: 20080231640
Abstract: Systems and methods are described, which create a mapping from a space of a source object (e.g., source facial expressions) to a space of a target object (e.g., target facial expressions). In certain implementations, the mapping is learned based on a training set composed of corresponding shapes (e.g., facial expressions) in each space. The user can create the training set by selecting expressions from, for example, captured source performance data, and by sculpting corresponding target expressions. Additional target shapes (e.g., target facial expressions) can be interpolated and extrapolated from the shapes in the training set to generate corresponding shapes for potential source shapes (e.g., facial expressions).
Type: Application
Filed: March 19, 2007
Publication date: September 25, 2008
Applicant: Lucasfilm Entertainment Company Ltd.
Inventors: Frederic P. Pighin, Cary Phillips, Steve Sullivan