Patents by Inventor Robert W. Sumner
Robert W. Sumner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11683662
Abstract: A storytelling system includes location-based content (LBC) servers and LBC effects databases for use in a real-world story venue, and a computing platform communicatively coupled to those LBC servers and databases. A processor of the computing platform executes software code to receive story selection data from a user device, obtain a story template including story arcs each associated with one or more LBC effect(s) that corresponds to the story selection data, and determine, using a location of the user device, one of the story arcs as an active story arc. The software code also identifies an LBC interaction zone for the LBC effect(s), designates one LBC server and one LBC effects database for supporting the active story arc, and distributes one or more of the LBC effect(s) to the designated database. The designated server enables instantiation of the one or more of the LBC effect(s) at the LBC interaction zone.
Type: Grant
Filed: June 23, 2022
Date of Patent: June 20, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Manuel Braunschweiler, Steven Poulakos, Mattia Ryffel, Robert W. Sumner
-
Publication number: 20230177784
Abstract: An image processing system includes a computing platform having a hardware processor and a system memory storing an image augmentation software code, a three-dimensional (3D) shapes library, and/or a 3D poses library. The image processing system also includes a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code. The hardware processor executes the image augmentation software code to provide an image to the 2D pose estimation module and to receive 2D pose data generated by the 2D pose estimation module based on the image. The image augmentation software code identifies a 3D shape and/or a 3D pose corresponding to the image using an optimization algorithm applied to the 2D pose data and one or both of the 3D poses library and the 3D shapes library, and may output the 3D shape and/or 3D pose to render an augmented image on a display.
Type: Application
Filed: January 27, 2023
Publication date: June 8, 2023
Inventors: Martin Guay, Gökcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert W. Sumner
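The abstract above says only that "an optimization algorithm" matches 2D pose data against a 3D poses library. As a rough sketch of that idea, the toy below selects the library pose whose pinhole projection minimizes 2D reprojection error; the pinhole model, the discrete candidate search, and every name here are illustrative assumptions, not the patented method:

```python
import numpy as np

def project(joints3d, focal=1.0):
    """Pinhole projection of 3D joint positions (x, y, z) onto the image plane."""
    return focal * joints3d[:, :2] / joints3d[:, 2:3]

def closest_pose(joints2d, pose_library):
    """Return the index of the library pose whose 2D projection best matches
    the estimated 2D pose, measured by summed squared reprojection error."""
    errors = [np.sum((project(pose) - joints2d) ** 2) for pose in pose_library]
    return int(np.argmin(errors))

# Two toy 3-joint poses at depth 4: an upright spine and a tilted one.
upright = np.array([[0.0, 0.0, 4.0], [0.0, 1.0, 4.0], [0.0, 2.0, 4.0]])
tilted = np.array([[0.0, 0.0, 4.0], [0.5, 1.0, 4.0], [1.0, 2.0, 4.0]])
observed = project(tilted)  # 2D pose data, as if from the estimation module
best = closest_pose(observed, [upright, tilted])
```

A production system would optimize continuously over pose parameters rather than search a finite bank; the discrete version just makes the objective concrete.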
-
Patent number: 11628374
Abstract: A virtual puppeteering system includes a portable device including a camera, a display, a hardware processor, and a system memory storing an object animation software code. The hardware processor is configured to execute the object animation software code to, using the camera, generate an image in response to receiving an activation input, using the display, display the image, and receive a selection input selecting an object shown in the image. The hardware processor is further configured to execute the object animation software code to determine a distance separating the selected object from the portable device, receive an animation input, identify, based on the selected object and the received animation input, a movement for animating the selected object, generate an animation of the selected object using the determined distance and the identified movement, and render the animation of the selected object.
Type: Grant
Filed: June 30, 2020
Date of Patent: April 18, 2023
Assignees: Disney Enterprises, Inc., ETH Zürich
Inventors: Raphael Anderegg, Loic Ciccone, Robert W. Sumner
-
Publication number: 20220322045
Abstract: A storytelling system includes location-based content (LBC) servers and LBC effects databases for use in a real-world story venue, and a computing platform communicatively coupled to those LBC servers and databases. A processor of the computing platform executes software code to receive story selection data from a user device, obtain a story template including story arcs each associated with one or more LBC effect(s) that corresponds to the story selection data, and determine, using a location of the user device, one of the story arcs as an active story arc. The software code also identifies an LBC interaction zone for the LBC effect(s), designates one LBC server and one LBC effects database for supporting the active story arc, and distributes one or more of the LBC effect(s) to the designated database. The designated server enables instantiation of the one or more of the LBC effect(s) at the LBC interaction zone.
Type: Application
Filed: June 23, 2022
Publication date: October 6, 2022
Inventors: Manuel Braunschweiler, Steven Poulakos, Mattia Ryffel, Robert W. Sumner
-
Patent number: 11445332
Abstract: A storytelling system includes location-based content (LBC) servers and LBC effects databases for use in a real-world story venue, and a computing platform communicatively coupled to those LBC servers and databases. A processor of the computing platform executes software code to receive story selection data from a user device, obtain a story template including story arcs each associated with one or more LBC effect(s) that corresponds to the story selection data, and determine, using a location of the user device, one of the story arcs as an active story arc. The software code also identifies an LBC interaction zone for the LBC effect(s), designates one LBC server and one LBC effects database for supporting the active story arc, and distributes one or more of the LBC effect(s) to the designated database. The designated server enables instantiation of the one or more of the LBC effect(s) at the LBC interaction zone.
Type: Grant
Filed: December 10, 2020
Date of Patent: September 13, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Manuel Braunschweiler, Steven Poulakos, Mattia Ryffel, Robert W. Sumner
-
Patent number: 11335051
Abstract: Techniques for animation are provided. A first trajectory for a first element in a first animation is determined. A first approximation is generated based on the first trajectory, and the first approximation is modified based on an updated state of the first element. The first trajectory is then refined based on the modified first approximation.
Type: Grant
Filed: March 11, 2020
Date of Patent: May 17, 2022
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Robert W. Sumner, Alba Maria Rios Rodriguez, Maurizio Nitti, Mattia Ryffel, Steven C. Poulakos
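The approximate-then-refine loop in this abstract can be sketched in a few lines. The polynomial approximation, the endpoint shift, the blend parameter, and all function names below are illustrative assumptions for the sketch, not the patented technique:

```python
import numpy as np

def fit_approximation(trajectory, degree=2):
    """Fit a low-order polynomial approximation to a sampled 2D trajectory,
    one polynomial per coordinate axis."""
    t = np.linspace(0.0, 1.0, len(trajectory))
    return [np.polyfit(t, trajectory[:, axis], degree) for axis in range(trajectory.shape[1])]

def evaluate(coeffs, n):
    """Sample the fitted approximation at n evenly spaced parameter values."""
    t = np.linspace(0.0, 1.0, n)
    return np.stack([np.polyval(c, t) for c in coeffs], axis=1)

def refine(trajectory, updated_endpoint, blend=0.5):
    """Modify the approximation so it ends at the element's updated state,
    then blend the original trajectory toward the modified approximation."""
    approx = evaluate(fit_approximation(trajectory), len(trajectory))
    approx = approx + (updated_endpoint - approx[-1])  # shift to updated state
    return (1.0 - blend) * trajectory + blend * approx

# A straight-line trajectory whose target has moved upward mid-animation.
traj = np.stack([np.linspace(0, 1, 5), np.zeros(5)], axis=1)
refined = refine(traj, np.array([1.0, 1.0]))
```

With `blend=0.5` the refined endpoint lands halfway between the original endpoint and the updated state, which is the qualitative behavior the abstract describes: the trajectory is nudged toward the element's new state rather than recomputed from scratch.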
-
Publication number: 20210274314
Abstract: A storytelling system includes location-based content (LBC) servers and LBC effects databases for use in a real-world story venue, and a computing platform communicatively coupled to those LBC servers and databases. A processor of the computing platform executes software code to receive story selection data from a user device, obtain a story template including story arcs each associated with one or more LBC effect(s) that corresponds to the story selection data, and determine, using a location of the user device, one of the story arcs as an active story arc. The software code also identifies an LBC interaction zone for the LBC effect(s), designates one LBC server and one LBC effects database for supporting the active story arc, and distributes one or more of the LBC effect(s) to the designated database. The designated server enables instantiation of the one or more of the LBC effect(s) at the LBC interaction zone.
Type: Application
Filed: December 10, 2020
Publication date: September 2, 2021
Inventors: Manuel Braunschweiler, Steven Poulakos, Mattia Ryffel, Robert W. Sumner
-
Publication number: 20210125393
Abstract: Techniques for animation are provided. A first trajectory for a first element in a first animation is determined. A first approximation is generated based on the first trajectory, and the first approximation is modified based on an updated state of the first element.
Type: Application
Filed: March 11, 2020
Publication date: April 29, 2021
Inventors: Robert W. Sumner, Alba Maria Rios Rodriguez, Maurizio Nitti, Mattia Ryffel, Steven C. Poulakos
-
Patent number: 10916046
Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
Type: Grant
Filed: February 28, 2019
Date of Patent: February 9, 2021
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
-
Publication number: 20210008461
Abstract: A virtual puppeteering system includes a portable device including a camera, a display, a hardware processor, and a system memory storing an object animation software code. The hardware processor is configured to execute the object animation software code to, using the camera, generate an image in response to receiving an activation input, using the display, display the image, and receive a selection input selecting an object shown in the image. The hardware processor is further configured to execute the object animation software code to determine a distance separating the selected object from the portable device, receive an animation input, identify, based on the selected object and the received animation input, a movement for animating the selected object, generate an animation of the selected object using the determined distance and the identified movement, and render the animation of the selected object.
Type: Application
Filed: June 30, 2020
Publication date: January 14, 2021
Inventors: Raphael Anderegg, Loic Ciccone, Robert W. Sumner
-
Patent number: 10885708
Abstract: An automated costume augmentation system includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to provide an image including a posed figure to an artificial neural network (ANN), receive from the ANN 2D skeleton data including joint positions corresponding to the posed figure, and determine a 3D pose corresponding to the posed figure using an optimization algorithm applied to the skeleton data. The software code further identifies one or more proportion(s) of the posed figure based on the skeleton data, determines bone directions corresponding to the posed figure using another optimization algorithm applied to the 3D pose, parameterizes a costume for the posed figure based on the 3D pose, the proportion(s), and the bone directions, and outputs an enhanced image including the posed figure augmented with the fitted costume for rendering on a display.
Type: Grant
Filed: October 16, 2018
Date of Patent: January 5, 2021
Assignees: Disney Enterprises, Inc., ETH Zurich
Inventors: Martin Guay, Gökcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert W. Sumner
-
Publication number: 20200279428
Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
Type: Application
Filed: February 28, 2019
Publication date: September 3, 2020
Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
-
Publication number: 20200118333
Abstract: An automated costume augmentation system includes a computing platform having a hardware processor and a system memory storing a software code. The hardware processor executes the software code to provide an image including a posed figure to an artificial neural network (ANN), receive from the ANN 2D skeleton data including joint positions corresponding to the posed figure, and determine a 3D pose corresponding to the posed figure using an optimization algorithm applied to the skeleton data. The software code further identifies one or more proportion(s) of the posed figure based on the skeleton data, determines bone directions corresponding to the posed figure using another optimization algorithm applied to the 3D pose, parameterizes a costume for the posed figure based on the 3D pose, the proportion(s), and the bone directions, and outputs an enhanced image including the posed figure augmented with the fitted costume for rendering on a display.
Type: Application
Filed: October 16, 2018
Publication date: April 16, 2020
Inventors: Martin Guay, Gökcen Cimen, Christoph Maurhofer, Mattia Ryffel, Robert W. Sumner
-
Patent number: 10553009
Abstract: Techniques for generating locomotion data for animating a virtual quadruped model, starting at an origin point and travelling to a destination point along a defined path. A virtual skeletal structure for the virtual quadruped model is analyzed to identify a torso region, limbs each ending in a respective end effector, and limb attributes. A predefined locomotion template for virtual quadruped characters is retrieved and mapped to the virtual quadruped model by aligning the torso region and the plurality of limbs of the virtual skeletal structure for the virtual quadruped with a second torso region and a second plurality of limbs of the predefined locomotion template. Locomotion data is generated for the virtual quadruped model based on the defined path and by upscaling the mapped predefined locomotion template, based at least on the set of limb attributes determined by analyzing the virtual skeletal structure for the virtual quadruped model.
Type: Grant
Filed: March 15, 2018
Date of Patent: February 4, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Martin Guay, Moritz Geilinger, Stelian Coros, Ye Yuan, Robert W. Sumner
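One way to picture the "upscaling the mapped predefined locomotion template" step is as scaling the template's stride by the ratio of the target model's limb length to the template's, then laying footfalls along the defined path. The proportional scaling rule and all names below are guesses for illustration, not the claimed method:

```python
def scaled_stride(template_stride, template_limb_length, model_limb_length):
    """Scale the template's stride by the ratio of the target model's limb
    length to the template's limb length (an assumed proportional rule)."""
    return template_stride * (model_limb_length / template_limb_length)

def footfalls_along_path(path_length, stride):
    """Place footfall positions at one-stride intervals along a straight
    path from the origin point toward the destination point."""
    count = int(path_length // stride)
    return [i * stride for i in range(count + 1)]

# A template tuned for 1 m legs, retargeted to a model with 2 m legs,
# walking a 3 m path.
stride = scaled_stride(0.5, 1.0, 2.0)
positions = footfalls_along_path(3.0, stride)
```

A real retargeting system would align whole torso and limb chains rather than a single scalar limb length, but the scalar version shows why limb attributes feed the upscaling.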
-
Publication number: 20190287288
Abstract: Techniques for generating locomotion data for animating a virtual quadruped model, starting at an origin point and travelling to a destination point along a defined path. A virtual skeletal structure for the virtual quadruped model is analyzed to identify a torso region, limbs each ending in a respective end effector, and limb attributes. A predefined locomotion template for virtual quadruped characters is retrieved and mapped to the virtual quadruped model by aligning the torso region and the plurality of limbs of the virtual skeletal structure for the virtual quadruped with a second torso region and a second plurality of limbs of the predefined locomotion template. Locomotion data is generated for the virtual quadruped model based on the defined path and by upscaling the mapped predefined locomotion template, based at least on the set of limb attributes determined by analyzing the virtual skeletal structure for the virtual quadruped model.
Type: Application
Filed: March 15, 2018
Publication date: September 19, 2019
Inventors: Martin Guay, Moritz Geilinger, Stelian Coros, Ye Yuan, Robert W. Sumner
-
Patent number: 9164723
Abstract: Techniques for displaying content using an augmented reality device are described. Embodiments provide a visual scene for display, the visual scene captured using one or more camera devices of the augmented reality device. Embodiments adjust physical display geometry characteristics of the visual scene to correct for optimal projection. Additionally, illumination characteristics of the visual scene are modified based on environmental illumination data to improve realism of the visual scene when it is displayed. Embodiments further adjust display characteristics of the visual scene to improve tone mapping output. The adjusted visual scene is then output for display on the augmented reality device.
Type: Grant
Filed: June 30, 2011
Date of Patent: October 20, 2015
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Stefan C. Geiger, Wojciech Jarosz, Manuel J. Lang, Kenneth J. Mitchell, Derek Nowrouzezahrai, Robert W. Sumner, Thomas Williams
-
Patent number: 9142056
Abstract: Rendering 3D paintings can be done by compositing brush strokes embedded in space. Image elements are rendered into an image representable by a pixel array wherein at least some of the image elements correspond to simulated painting strokes. A method may include determining stroke positions in a 3D space, determining stroke orders, and for each pixel to be addressed, determining a pixel color value by determining stroke intersections with a view ray for that pixel, determining a depth order and a stroke order for intersecting fragments, each fragment having a color, alpha value, depth, and stroke order, assigning an intermediate color to each of the fragments, corresponding to a compositing of nearby fragments in stroke order, and assigning a color to the pixel that corresponds to a compositing of the fragments using the intermediate colors assigned to the fragments. The compositing may be done in depth order.
Type: Grant
Filed: May 18, 2012
Date of Patent: September 22, 2015
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Ilya Baran, Johannes Schmid, Markus Gross, Robert W. Sumner
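The per-pixel step this abstract describes builds on ordinary front-to-back "over" compositing of depth-sorted fragments. The sketch below shows only that base case, using scalar gray values for color; it deliberately omits the abstract's distinctive intermediate-color pass over nearby fragments in stroke order, so it is a simplification rather than the patented method:

```python
def composite_pixel(fragments):
    """Composite one pixel from stroke fragments intersecting its view ray.
    Each fragment is (depth, stroke_order, gray, alpha). Fragments are
    sorted near-to-far by depth, then accumulated with front-to-back
    'over' compositing."""
    out_gray, out_alpha = 0.0, 0.0
    for depth, stroke_order, gray, alpha in sorted(fragments):
        weight = (1.0 - out_alpha) * alpha  # remaining transparency budget
        out_gray += weight * gray
        out_alpha += weight
    return out_gray, out_alpha

# A bright fragment behind a dark semi-transparent one on this view ray.
pixel = composite_pixel([(2.0, 7, 1.0, 0.5), (1.0, 3, 0.0, 0.5)])
```

Because tuples sort on their first element, sorting the fragment list orders them by depth; the stroke order travels along unused here, whereas the patented method would consult it when assigning each fragment's intermediate color.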
-
Publication number: 20130002698
Abstract: Techniques for displaying content using an augmented reality device are described. Embodiments provide a visual scene for display, the visual scene captured using one or more camera devices of the augmented reality device. Embodiments adjust physical display geometry characteristics of the visual scene to correct for optimal projection. Additionally, illumination characteristics of the visual scene are modified based on environmental illumination data to improve realism of the visual scene when it is displayed. Embodiments further adjust display characteristics of the visual scene to improve tone mapping output. The adjusted visual scene is then output for display on the augmented reality device.
Type: Application
Filed: June 30, 2011
Publication date: January 3, 2013
Applicant: Disney Enterprises, Inc.
Inventors: Stefan C. Geiger, Wojciech Jarosz, Manuel J. Lang, Kenneth J. Mitchell, Derek Nowrouzezahrai, Robert W. Sumner, Thomas Williams