Patents by Inventor Paul Beardsley
Paul Beardsley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9067320
Abstract: Techniques are disclosed for controlling robot pixels to display a visual representation of a real-world video texture. Mobile robots with controllable color may generate visual representations of the real-world video texture to create an effect like fire, sunlight on water, leaves fluttering in sunlight, a wheat field swaying in the wind, crowd flow in a busy city, and clouds in the sky. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic and holonomic robots to guarantee smooth and collision-free motions.
Type: Grant
Filed: April 27, 2012
Date of Patent: June 30, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Paul Beardsley, Javier Alonso Mora, Andreas Breitenmoser, Federico Perazzi, Alexander Hornung
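The abstract separates two problems: deciding where the robot pixels should go to reproduce the texture, and moving them there without collisions. A minimal sketch of the first half, assuming a grayscale frame with values in [0, 1] and a greedy nearest-goal assignment (both are illustrative simplifications; the patent's actual allocation and its collision-avoidance layer are more involved):

```python
import math

def sample_goals(frame, n_robots, threshold=0.5):
    """Pick goal cells for the robot pixels: the n_robots brightest
    cells of a grayscale frame (values in [0, 1])."""
    cells = [(v, (r, c))
             for r, row in enumerate(frame)
             for c, v in enumerate(row) if v >= threshold]
    cells.sort(reverse=True)          # brightest first
    return [pos for _, pos in cells[:n_robots]]

def assign_robots(robots, goals):
    """Greedy nearest-goal assignment; the patent decouples this
    allocation step from the collision-avoidance layer."""
    remaining = list(goals)
    assignment = {}
    for i, (rx, ry) in enumerate(robots):
        best = min(remaining, key=lambda g: math.hypot(g[0] - rx, g[1] - ry))
        assignment[i] = best
        remaining.remove(best)
    return assignment

frame = [[0.9, 0.1], [0.2, 0.8]]
goals = sample_goals(frame, 2)
robots = [(0.0, 0.0), (1.0, 1.0)]
print(assign_robots(robots, goals))
```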
-
Patent number: 9036898
Abstract: High-quality passive performance capture using anchor frames derives in part from a robust tracking algorithm. Tracking is performed in image space and uses an integrated result to propagate a single reference mesh to performances represented in the image space. Image-space tracking is computed for each camera in multi-camera setups. Thus, multiple hypotheses can be propagated forward in time. If one flow computation develops inaccuracies, the others can compensate. This yields results superior to mesh-based tracking techniques because image data typically contains much more detail, facilitating more accurate tracking. Moreover, the problem of error propagation due to inaccurate tracking in image space can be dealt with in the same domain in which it occurs. Therefore, there is less complication of distortion due to parameterization, a technique used frequently in mesh processing algorithms.
Type: Grant
Filed: November 2, 2011
Date of Patent: May 19, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Thabo Beeler, Bernd Bickel, Fabian Hahn, Derek Bradley, Paul Beardsley, Bob Sumner, Markus Gross
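The claim that per-camera flow hypotheses let one inaccurate computation be outvoted can be illustrated with a simple robust fuse. The per-component median below is a stand-in for illustration only, not the tracker described in the patent:

```python
import statistics

def fuse_flow_hypotheses(flows):
    """Each camera proposes a 2D flow vector for the same tracked
    point; taking the per-component median discards an outlier
    hypothesis, so one drifting flow computation cannot derail
    the propagated track."""
    xs = [f[0] for f in flows]
    ys = [f[1] for f in flows]
    return (statistics.median(xs), statistics.median(ys))

# three cameras agree; one flow hypothesis has drifted badly
hypotheses = [(1.0, 0.5), (1.1, 0.5), (0.9, 0.4), (5.0, -3.0)]
print(fuse_flow_hypotheses(hypotheses))
```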
-
Publication number: 20150078620
Abstract: Provided is an aircraft having a spherical body which generates buoyancy, or which may generate buoyancy when filled with gas. The aircraft further comprises four actuation units arranged on the surface of the body for moving the aircraft in translation and/or rotation through the air, and at least one camera arranged on or in the surface of the body. Further provided are a method for providing optical information to a person in the environment of a flying aircraft, a method for providing optical information about and/or surveying an object, a method for transmitting acoustic information, and a method for observing or tracking an object.
Type: Application
Filed: April 19, 2013
Publication date: March 19, 2015
Applicant: ETH Zurich
Inventors: Anton Ledergerber, Andreas Schaffner, Claudio Ruch, Daniel Meier, Johannes Weichart, Lukas Gasser, Luca Muri, Miro Kach, Matthias Krebs, Matthias Burri, Lukas Mosimann, Nicolas Vuilliomenet, Randy Michaud, Simon Laube, Paul Beardsley, Javier Alonso Mora, Roland Yves Siegwart, Stefan Leutenegger, Konrad Rudin
-
Publication number: 20140368615
Abstract: To generate a pixel-accurate depth map, data from a range-estimation sensor (e.g., a time-of-flight sensor) is combined with data from multiple cameras to produce a high-quality depth measurement for pixels in an image. To do so, a depth measurement system may use a plurality of cameras mounted on a support structure to perform a depth hypothesis technique to generate a first depth-support value. Furthermore, the apparatus may include a range-estimation sensor which generates a second depth-support value. In addition, the system may project a 3D point onto the auxiliary cameras and compare the color of the associated pixel in each auxiliary camera with the color of the pixel in the reference camera to generate a third depth-support value. The system may combine these support values for each pixel in an image to determine respective depth values. Using these values, the system may generate a depth map for the image.
Type: Application
Filed: June 12, 2013
Publication date: December 18, 2014
Inventors: Jeroen van Baar, Paul A. Beardsley, Marc Pollefeys, Markus Gross
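One hedged reading of how the three per-pixel support values might combine: score each candidate depth by a weighted sum of its supports and keep the best hypothesis. The weights, tuple layout, and winner-take-all choice are assumptions for illustration, not taken from the publication:

```python
def best_depth(candidates):
    """candidates: list of (depth, stereo_support, range_support,
    color_support) tuples for one pixel. Combine the three support
    values and keep the depth hypothesis with the highest total."""
    w_stereo, w_range, w_color = 1.0, 1.0, 0.5   # illustrative weights
    def total(c):
        _, stereo, rng, color = c
        return w_stereo * stereo + w_range * rng + w_color * color
    return max(candidates, key=total)[0]

hypotheses = [
    (1.2, 0.9, 0.8, 0.7),   # depth supported by all three cues
    (2.5, 0.3, 0.2, 0.9),   # color agrees, stereo and range do not
]
print(best_depth(hypotheses))
```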
-
Publication number: 20140292770
Abstract: Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face; the robot pixels deploy in a physical arrangement to display a visual representation of the face, and change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination.
Type: Application
Filed: March 5, 2014
Publication date: October 2, 2014
Applicant: Disney Enterprises, Inc.
Inventors: Paul Beardsley, Javier Alonso Mora, Andreas Breitenmoser, Martin Rufli, Roland Siegwart, Iain Matthews, Katsu Yamane
-
Publication number: 20140198221
Abstract: A three-dimensional coordinate position of a calibration device is determined. Further, a code is emitted to an image capture device; the code indicates the three-dimensional coordinate position of the calibration device. In addition, an image of light emitted from the calibration device is captured, the light including the code. An image capture device three-dimensional coordinate position of the calibration device is calibrated according to the real-world three-dimensional coordinate position of the calibration device indicated by the code.
Type: Application
Filed: January 15, 2013
Publication date: July 17, 2014
Inventors: Paul Beardsley, Anselm Grundhoefer, Daniel Reetz
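The publication does not specify the code format. Purely as an illustration, suppose the device blinks three 16-bit fixed-point words for x, y, z (bit width, ordering, and the centimetre scale are all assumptions):

```python
def encode_position(x, y, z, scale=0.01):
    """Hypothetical encoder: pack x, y, z (metres) as three 16-bit
    words at 1 cm resolution, emitted as a blink sequence."""
    bits = []
    for v in (x, y, z):
        bits.extend(int(b) for b in format(round(v / scale), "016b"))
    return bits

def decode_position_code(bits, scale=0.01):
    """Decode the blink sequence captured by the camera back into a
    3D coordinate position."""
    assert len(bits) == 48
    coords = []
    for i in range(3):
        word = bits[16 * i: 16 * (i + 1)]
        coords.append(int("".join(str(b) for b in word), 2) * scale)
    return tuple(coords)

pos = (1.23, 4.56, 0.78)
print(decode_position_code(encode_position(*pos)))
```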
-
Patent number: 8723872
Abstract: Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face; the robot pixels deploy in a physical arrangement to display a visual representation of the face, and change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination.
Type: Grant
Filed: June 8, 2011
Date of Patent: May 13, 2014
Assignee: Disney Enterprises, Inc.
Inventors: Paul Beardsley, Javier Alonso Mora, Andreas Breitenmoser, Martin Rufli, Roland Siegwart, Iain Matthews, Katsu Yamane
-
Patent number: 8721132
Abstract: A visual display is provided through the combined effect of many devices that individually illuminate in response to wind flow. The devices are distributed at different locations within a three-dimensional space, so as to provide an overall illumination effect throughout the space that visually indicates wind (air) or other fluid flowing through the space. Each device can include a housing, at least one light source, a sensor system, and a device controller. The sensor system can include any of various types of sensor subsystems, each having one or more sensors, but includes at least a flow sensor subsystem. The device controller is configured to activate the light source in response to the sensor system detecting a change in an environmentally-related input, such as air flow, sensed by the sensor system.
Type: Grant
Filed: November 2, 2011
Date of Patent: May 13, 2014
Assignee: Disney Enterprises, Inc.
Inventors: Paul A. Beardsley, Martin M. Rufli, Pirmin Mattmann
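A minimal sketch of the per-device controller described above: the light activates when the flow sensor reports a change in air flow, not when the absolute level is high. The threshold, units, and class shape are illustrative assumptions:

```python
class WindLightController:
    """Toy device controller: turn the light source on when the flow
    sensor subsystem reports a change in air flow above a threshold
    (threshold value and flow units are illustrative)."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.last_flow = 0.0
        self.light_on = False

    def update(self, flow):
        # respond to a *change* in the environmentally-related input,
        # not to its absolute level
        self.light_on = abs(flow - self.last_flow) > self.threshold
        self.last_flow = flow
        return self.light_on

ctrl = WindLightController()
# steady air, then a gust arrives, then the flow steadies again
print([ctrl.update(f) for f in (0.0, 0.05, 0.5, 0.55)])
```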
-
Patent number: 8670606
Abstract: A computer-implemented method for generating a three-dimensional model of an object. The method includes generating a coarse geometry mesh of the object; calculating an optimization for the coarse geometry mesh based on photometric consistency and surface consistency associated with the coarse geometry mesh; and refining the coarse geometry mesh with the optimization to generate the three-dimensional model for the object.
Type: Grant
Filed: October 4, 2010
Date of Patent: March 11, 2014
Assignee: Disney Enterprises, Inc.
Inventors: Thabo Dominik Beeler, Bernd Bickel, Markus Gross, Robert Sumner, Paul Beardsley
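A toy 1-D analogue of the refinement step: each vertex is pulled toward a photometric-consistency target while a surface-consistency term pulls it toward the average of its neighbours. The patent formulates this as an optimization over a full mesh; the weights, iteration count, and fixed boundary here are illustrative assumptions:

```python
def refine_heights(heights, photo_targets, w_photo=0.7, w_surf=0.3, iters=20):
    """Iteratively blend a photometric target (data term) with the
    neighbour average (smoothness term) for each interior vertex of
    a 1-D height profile; endpoints stay fixed."""
    h = list(heights)
    for _ in range(iters):
        new = h[:]
        for i in range(1, len(h) - 1):
            neighbour_avg = 0.5 * (h[i - 1] + h[i + 1])
            new[i] = w_photo * photo_targets[i] + w_surf * neighbour_avg
        h = new
    return h

coarse = [0.0, 0.9, 0.1, 0.8, 0.0]     # noisy coarse geometry
targets = [0.0, 0.4, 0.5, 0.6, 0.0]    # photometric evidence
print([round(v, 3) for v in refine_heights(coarse, targets)])
```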
-
Patent number: 8659643
Abstract: There is provided a system and method for counting riders arbitrarily positioned in a vehicle. There is provided a method comprising receiving, from at least one camera filtered to capture non-visible light, video data corresponding to the vehicle passing through a light source filtered for non-visible light, converting the video data into a 3D height map, and analyzing the 3D height map to determine a number of riders in the vehicle. The camera and light source may be mounted in a permanent position using a gantry or another suitable system where the vehicle travels across the camera and light system in a determined manner, for example through a vehicle track. Multiple cameras may be used to increase detection accuracy. To detect persons in the 3D height map, the analysis may search for height patterns indicating heads and shoulders of persons, compare against height map templates, or use machine-learning methods.
Type: Grant
Filed: January 18, 2011
Date of Patent: February 25, 2014
Assignee: Disney Enterprises, Inc.
Inventors: Christopher Purvis, Paul Beardsley
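The head-and-shoulders search can be sketched on a 1-D slice of the height map: a head shows up as a local maximum above a height floor, with a drop to the shoulders on either side. The thresholds are illustrative assumptions, not values from the patent:

```python
def count_riders(height_map, head_min=1.1, shoulder_drop=0.15):
    """Count head-and-shoulder patterns in a 1-D slice of the 3D
    height map: local maxima above head_min whose neighbours drop
    by at least shoulder_drop (thresholds in metres, illustrative)."""
    count = 0
    for i in range(1, len(height_map) - 1):
        h = height_map[i]
        if (h >= head_min
                and h - height_map[i - 1] >= shoulder_drop
                and h - height_map[i + 1] >= shoulder_drop):
            count += 1
    return count

# seat back, rider one, gap between seats, rider two, seat back
profile = [0.5, 0.9, 1.3, 0.9, 0.6, 1.0, 1.4, 1.0, 0.5]
print(count_riders(profile))
```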
-
Patent number: 8633926
Abstract: Techniques are provided for mesoscopic geometry modulation. A first set of mesoscopic details associated with an object is determined by applying a filter to an image of the object. Mesoscopic details included in the first set of mesoscopic details are detectable in the image of the object and are not detectable when generating a coarse geometry reconstruction of the object. A three-dimensional model for the object is generated by modulating the coarse geometry with the first set of mesoscopic details.
Type: Grant
Filed: January 18, 2010
Date of Patent: January 21, 2014
Assignee: Disney Enterprises, Inc.
Inventors: Thabo Dominik Beeler, Bernd Bickel, Markus Gross, Robert Sumner, Paul Beardsley
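In 1-D, the idea reads as: extract the high-frequency (mesoscopic) part of the image with a filter, then add it to the coarse geometry as a displacement. The box-blur high-pass and the scalar strength below are illustrative stand-ins for the patent's filter, and in 3-D the displacement would be along the surface normal:

```python
def high_pass(signal, radius=1):
    """Crude high-pass filter: the signal minus a box-blurred copy.
    This stands in for the (unspecified) filter in the abstract."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(signal[i] - sum(signal[lo:hi]) / (hi - lo))
    return out

def modulate(coarse, image, strength=0.5):
    """Modulate the coarse geometry with the mesoscopic image detail
    (pore- and wrinkle-scale variation the coarse reconstruction
    cannot capture)."""
    detail = high_pass(image)
    return [c + strength * d for c, d in zip(coarse, detail)]

coarse = [1.0, 1.0, 1.0, 1.0]        # flat coarse geometry
image = [0.5, 0.5, 0.9, 0.5]         # intensity bump at fine scale
print([round(v, 3) for v in modulate(coarse, image)])
```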
-
Publication number: 20130106312
Abstract: A visual display is provided through the combined effect of many devices that individually illuminate in response to wind flow. The devices are distributed at different locations within a three-dimensional space, so as to provide an overall illumination effect throughout the space that visually indicates wind (air) or other fluid flowing through the space. Each device can include a housing, at least one light source, a sensor system, and a device controller. The sensor system can include any of various types of sensor subsystems, each having one or more sensors, but includes at least a flow sensor subsystem. The device controller is configured to activate the light source in response to the sensor system detecting a change in an environmentally-related input, such as air flow, sensed by the sensor system.
Type: Application
Filed: November 2, 2011
Publication date: May 2, 2013
Applicant: Disney Enterprises, Inc.
Inventors: Paul A. Beardsley, Martin M. Rufli, Pirmin Mattmann
-
Publication number: 20120313937
Abstract: Techniques are provided to model hair and skin. Multiscopic images are received that depict at least part of a subject having hair. The multiscopic images are analyzed to determine the hairs depicted. Two-dimensional hair segments are generated that represent the hairs. Three-dimensional hair segments are generated based on the two-dimensional hair segments. A three-dimensional model of skin is generated based on the three-dimensional hair segments.
Type: Application
Filed: August 17, 2012
Publication date: December 13, 2012
Applicant: Disney Enterprises, Inc.
Inventors: Thabo D. Beeler, Paul Beardsley, Robert Sumner, Bernd Bickel, Steve Marschner
-
Publication number: 20120206473
Abstract: Techniques are disclosed for controlling robot pixels to display a visual representation of a real-world video texture. Mobile robots with controllable color may generate visual representations of the real-world video texture to create an effect like fire, sunlight on water, leaves fluttering in sunlight, a wheat field swaying in the wind, crowd flow in a busy city, and clouds in the sky. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic and holonomic robots to guarantee smooth and collision-free motions.
Type: Application
Filed: April 27, 2012
Publication date: August 16, 2012
Inventors: Paul Beardsley, Javier Alonso Mora, Andreas Breitenmoser, Federico Perazzi, Alexander Hornung
-
Publication number: 20120182390
Abstract: There is provided a system and method for counting riders arbitrarily positioned in a vehicle. There is provided a method comprising receiving, from at least one camera filtered to capture non-visible light, video data corresponding to the vehicle passing through a light source filtered for non-visible light, converting the video data into a 3D height map, and analyzing the 3D height map to determine a number of riders in the vehicle. The camera and light source may be mounted in a permanent position using a gantry or another suitable system where the vehicle travels across the camera and light system in a determined manner, for example through a vehicle track. Multiple cameras may be used to increase detection accuracy. To detect persons in the 3D height map, the analysis may search for height patterns indicating heads and shoulders of persons, compare against height map templates, or use machine-learning methods.
Type: Application
Filed: January 18, 2011
Publication date: July 19, 2012
Applicant: Disney Enterprises, Inc.
Inventors: Christopher Purvis, Paul Beardsley
-
Patent number: 8107721
Abstract: A camera acquires a set of coded images and a set of flash images of a semi-specular object. The coded images are acquired while scanning the object with a laser beam pattern, and the flash images are acquired while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source. 3D coordinates of points on the surface of the object are determined from the set of coded images, and 2D silhouettes of the object are determined from shadows cast in the set of flash images. Surface normals are obtained for the 3D points from photometric stereo on the set of flash images. The 3D coordinates, 2D silhouettes and surface normals are compared with a known 3D model of the object to determine the pose of the object.
Type: Grant
Filed: May 29, 2008
Date of Patent: January 31, 2012
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Paul A. Beardsley, Moritz Baecher
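The "surface normals from photometric stereo on the flash images" step, reduced to its textbook Lambertian core for a single surface point. This is a simplification (the patent's object is semi-specular, and real setups use more than three lights and a least-squares solve):

```python
import numpy as np

def normal_from_flash_images(light_dirs, intensities):
    """Classic Lambertian photometric stereo for one surface point:
    given three known light directions L (rows) and the observed
    intensities I from the three flash images, solve L n = I; the
    direction of n is the surface normal, its length the albedo."""
    L = np.asarray(light_dirs, dtype=float)    # 3x3 light directions
    I = np.asarray(intensities, dtype=float)   # 3 observed intensities
    n_tilde = np.linalg.solve(L, I)
    return n_tilde / np.linalg.norm(n_tilde)

# light sources at different locations near the camera
lights = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [-1.0, 0.0, 1.0]]
true_n = np.array([0.0, 0.0, 1.0])
observed = [np.dot(l, true_n) for l in lights]   # synthetic intensities
print(normal_from_flash_images(lights, observed))
```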
-
Publication number: 20110304633
Abstract: Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face; the robot pixels deploy in a physical arrangement to display a visual representation of the face, and change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination.
Type: Application
Filed: June 8, 2011
Publication date: December 15, 2011
Inventors: Paul Beardsley, Javier Alonso Mora, Andreas Breitenmoser, Martin Rufli, Roland Siegwart, Iain Matthews, Katsu Yamane
-
Publication number: 20110175912
Abstract: A computer-implemented method for generating a three-dimensional model of an object. The method includes generating a coarse geometry mesh of the object; calculating an optimization for the coarse geometry mesh based on photometric consistency and surface consistency associated with the coarse geometry mesh; and refining the coarse geometry mesh with the optimization to generate the three-dimensional model for the object.
Type: Application
Filed: October 4, 2010
Publication date: July 21, 2011
Inventors: Thabo Dominik Beeler, Bernd Bickel, Markus Gross, Robert Sumner, Paul Beardsley
-
Patent number: 7711182
Abstract: Surface normals and other 3D shape descriptors are determined for a specular or hybrid specular-diffuse object. A camera records an image of a smoothly spatially-varying pattern being reflected in the surface of the object, with the pattern placed at an initial position. The camera then records multiple images of the pattern undergoing a sequence of subsequent displacements to a final position distinct from the initial position. For a pixel in the images, the pattern displacement that corresponds to the minimum difference between the pixel value in the initial image and any of the final images is determined. The incident ray that strikes the surface of the object at the point being imaged by the pixel is then determined using the determined pattern displacement. The surface normal at that same surface point is then determined using the determined incident ray.
Type: Grant
Filed: August 1, 2006
Date of Patent: May 4, 2010
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventor: Paul A. Beardsley
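The final step above — incident ray to surface normal — follows from the mirror-reflection law: at the surface point, the normal bisects the direction back toward the pattern and the direction toward the camera. A sketch of just that step, assuming the incident ray has already been recovered from the pattern displacement:

```python
import numpy as np

def specular_normal(to_source, to_camera):
    """Law of mirror reflection: the surface normal is the normalized
    bisector of the unit vector toward the (pattern) source and the
    unit vector toward the camera."""
    s = np.asarray(to_source, dtype=float)
    c = np.asarray(to_camera, dtype=float)
    s = s / np.linalg.norm(s)
    c = c / np.linalg.norm(c)
    n = s + c
    return n / np.linalg.norm(n)

# pattern source and camera placed symmetrically about the z axis,
# so the recovered normal should point straight up
print(specular_normal([1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]))
```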
-
Publication number: 20090297020
Abstract: A camera acquires a set of coded images and a set of flash images of a semi-specular object. The coded images are acquired while scanning the object with a laser beam pattern, and the flash images are acquired while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source. 3D coordinates of points on the surface of the object are determined from the set of coded images, and 2D silhouettes of the object are determined from shadows cast in the set of flash images. Surface normals are obtained for the 3D points from photometric stereo on the set of flash images. The 3D coordinates, 2D silhouettes and surface normals are compared with a known 3D model of the object to determine the pose of the object.
Type: Application
Filed: May 29, 2008
Publication date: December 3, 2009
Inventors: Paul A. Beardsley, Moritz Baecher