Patents by Inventor Fredo Durand
Fredo Durand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11887248
Abstract: Systems and methods described herein relate to reconstructing a scene in three dimensions from a two-dimensional image. One embodiment processes an image using a detection transformer to detect an object in the scene and to generate a NOCS map of the object and a background depth map; uses MLPs to relate the object to a differentiable database of object priors (PriorDB); recovers, from the NOCS map, a partial 3D object shape; estimates an initial object pose; fits a PriorDB object prior to align in geometry and appearance with the partial 3D shape to produce a complete shape and refines the initial pose estimate; generates an editable and re-renderable 3D scene reconstruction based, at least in part, on the complete shape, the refined pose estimate, and the depth map; and controls the operation of a robot based, at least in part, on the editable and re-renderable 3D scene reconstruction.
Type: Grant
Filed: March 16, 2022
Date of Patent: January 30, 2024
Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology, The Board of Trustees of the Leland Stanford Junior University
Inventors: Sergey Zakharov, Wadim Kehl, Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Dennis Park, Joshua Tenenbaum, Jiajun Wu, Fredo Durand, Vincent Sitzmann
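The NOCS-map recovery step lends itself to a minimal sketch. Everything below is an illustrative assumption (function name, array shapes), not the patented pipeline: a NOCS map stores, per pixel, a point in the object's normalized coordinate space, so a partial 3D shape can be read off as the set of those points at pixels the detector assigned to the object.

```python
import numpy as np

def partial_shape_from_nocs(nocs_map, mask):
    """Gather the per-pixel normalized object coordinates (x, y, z) at the
    detected object pixels, yielding a partial 3D point cloud of the object.

    nocs_map: (H, W, 3) array of normalized object coordinates.
    mask:     (H, W) boolean array marking the detected object's pixels.
    """
    return nocs_map[mask]  # (N, 3) partial shape as a point cloud
```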
-
Publication number: 20220414974
Abstract: Systems and methods described herein relate to reconstructing a scene in three dimensions from a two-dimensional image. One embodiment processes an image using a detection transformer to detect an object in the scene and to generate a NOCS map of the object and a background depth map; uses MLPs to relate the object to a differentiable database of object priors (PriorDB); recovers, from the NOCS map, a partial 3D object shape; estimates an initial object pose; fits a PriorDB object prior to align in geometry and appearance with the partial 3D shape to produce a complete shape and refines the initial pose estimate; generates an editable and re-renderable 3D scene reconstruction based, at least in part, on the complete shape, the refined pose estimate, and the depth map; and controls the operation of a robot based, at least in part, on the editable and re-renderable 3D scene reconstruction.
Type: Application
Filed: March 16, 2022
Publication date: December 29, 2022
Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology, The Board of Trustees of the Leland Stanford Junior University
Inventors: Sergey Zakharov, Wadim Kehl, Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Dennis Park, Joshua Tenenbaum, Jiajun Wu, Fredo Durand, Vincent Sitzmann
-
Patent number: 11436839
Abstract: The present disclosure provides systems and methods to detect occluded objects using shadow information to anticipate moving obstacles that are occluded behind a corner or other obstacle. The system may perform a dynamic threshold analysis on enhanced images allowing the detection of even weakly visible shadows. The system may classify an image sequence as either “dynamic” or “static”, enabling an autonomous vehicle, or other moving platform, to react and respond to a moving, yet occluded object by slowing down or stopping.
Type: Grant
Filed: November 2, 2018
Date of Patent: September 6, 2022
Assignees: TOYOTA RESEARCH INSTITUTE, INC., MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Inventors: Felix Maximilian Naser, Igor Gilitschenski, Guy Rosman, Alexander Andre Amini, Fredo Durand, Antonio Torralba, Gregory Wornell, William Freeman, Sertac Karaman, Daniela Rus
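A toy sketch of the “dynamic” vs. “static” classification idea follows. It is an assumed illustration, not the patented system: frames are contrast-enhanced, shadow pixels are selected by a per-frame statistic-driven threshold, and the sequence is labeled dynamic when those shadow masks change over time. All names and parameter values (`enhance_gain`, `var_thresh`, the mean-minus-std cutoff) are invented for the sketch.

```python
import numpy as np

def classify_sequence(frames, enhance_gain=5.0, var_thresh=1e-3):
    """Label a grayscale image sequence 'dynamic' or 'static' from the
    temporal behavior of its thresholded shadow masks."""
    masks = []
    for f in frames:
        g = f.astype(np.float64)
        g = (g - g.mean()) * enhance_gain + g.mean()  # contrast enhancement
        cutoff = g.mean() - g.std()                   # dynamic threshold
        masks.append(g < cutoff)                      # dark (shadow) pixels
    masks = np.stack(masks).astype(np.float64)
    motion = masks.var(axis=0).mean()                 # how much the masks move
    return "dynamic" if motion > var_thresh else "static"
```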
-
Publication number: 20200143177
Abstract: The present disclosure provides systems and methods to detect occluded objects using shadow information to anticipate moving obstacles that are occluded behind a corner or other obstacle. The system may perform a dynamic threshold analysis on enhanced images allowing the detection of even weakly visible shadows. The system may classify an image sequence as either “dynamic” or “static”, enabling an autonomous vehicle, or other moving platform, to react and respond to a moving, yet occluded object by slowing down or stopping.
Type: Application
Filed: November 2, 2018
Publication date: May 7, 2020
Inventors: Felix Maximilian NASER, Igor GILITSCHENSKI, Guy ROSMAN, Alexander Andre AMINI, Fredo DURAND, Antonio TORRALBA, Gregory WORNELL, William FREEMAN, Sertac KARAMAN, Daniela RUS
-
Patent number: 9280848
Abstract: Rendering a scene with participating media is done by generating a depth map from a camera viewpoint and a shadow map from a light source, converting the shadow map using epipolar rectification to form a rectified shadow map (or generating the rectified shadow map directly), generating an approximation to visibility terms in a scattering integral, then computing a 1D min-max mipmap or other acceleration data structure for rectified shadow map rows and traversing that mipmap/data structure to find lit segments to accumulate values for the scattering integral for specific camera rays, and generating rendered pixel values that take into account accumulated values for the scattering integral for the camera rays. The scattering near an epipole of the rectified shadow map might be done using brute force ray marching when the epipole is on or near the screen. The process can be implemented using a GPU for parallel operations.
Type: Grant
Filed: October 24, 2011
Date of Patent: March 8, 2016
Assignee: Disney Enterprises Inc.
Inventors: Jiawen Chen, Ilya Baran, Frédo Durand, Wojciech Jarosz
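The 1D min-max mipmap named in the abstract can be sketched generically (function name and layout assumed; the epipolar rectification and traversal details are omitted). Each level stores the (min, max) depth of pairs of entries from the level below, so a traversal can prove an entire span of a rectified shadow-map row lit or shadowed without visiting every texel:

```python
def build_minmax_mipmap(row):
    """Build a 1D min-max mipmap over one shadow-map row (length assumed
    a power of two).  Level 0 holds (v, v) per texel; each higher level
    stores the (min, max) of its two children from the level below."""
    levels = [[(v, v) for v in row]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(min(prev[2 * i][0], prev[2 * i + 1][0]),
                        max(prev[2 * i][1], prev[2 * i + 1][1]))
                       for i in range(len(prev) // 2)])
    return levels
```

During traversal, a camera-ray segment whose depth lies below a node's stored minimum is fully lit over that node's span (above the maximum, fully shadowed), so its contribution to the scattering integral can be accumulated without descending further.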
-
Publication number: 20140072228
Abstract: In one embodiment, a method of amplifying temporal variation in at least two images includes converting two or more images to a transform representation. The method further includes, for each spatial position within the two or more images, examining a plurality of coefficient values. The method additionally includes calculating a first vector based on the plurality of coefficient values. The first vector can represent change from a first image to a second image of the at least two images describing deformation. The method also includes modifying the first vector to create a second vector. The method further includes calculating a second plurality of coefficients based on the second vector.
Type: Application
Filed: September 7, 2012
Publication date: March 13, 2014
Applicant: Massachusetts Institute of Technology
Inventors: Michael Rubinstein, Neal Wadhwa, Fredo Durand, William T. Freeman
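The coefficient-vector steps can be sketched with a Fourier transform standing in for the transform representation (an assumption made here for brevity; the abstract's "transform representation" is more general). The change between the two images' coefficients plays the role of the first vector; scaling it by a factor and adding it back yields the second plurality of coefficients:

```python
import numpy as np

def amplify_change(img_a, img_b, alpha=3.0):
    """Amplify the temporal change from img_a to img_b in coefficient space."""
    ca = np.fft.fft2(img_a.astype(np.float64))
    cb = np.fft.fft2(img_b.astype(np.float64))
    delta = cb - ca               # first vector: change in coefficients
    cb_amp = ca + alpha * delta   # second vector: amplified change
    return np.real(np.fft.ifft2(cb_amp))
```

Because the Fourier transform is linear, this toy version reduces to `a + alpha * (b - a)` per pixel; modifying the magnitude or phase of the change vector nonlinearly is where transform-domain amplification departs from plain pixel differencing.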
-
Patent number: 8451338
Abstract: Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g. horizontal), applicants show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, applicants show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is not only invariant to object velocity, but preserves image frequency content nearly optimally. That is, static objects are degraded relative to their image from a static camera, but all moving objects within a given range of motions reconstruct well.
Type: Grant
Filed: March 28, 2008
Date of Patent: May 28, 2013
Assignee: Massachusetts Institute of Technology
Inventors: Anat Levin, Peter Sand, Taeg Sang Cho, Fredo Durand, William T. Freeman
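The core of the velocity-invariance claim can be sketched in a few lines (parameter names and values assumed for illustration): a sensor following c(t) = a·t² has velocity 2·a·t, which sweeps every value in [-2aT, 2aT] during the exposure, so every object velocity in that range is perfectly tracked at exactly one instant.

```python
def tracked_instant(v, a=1.0, T=1.0):
    """Return the time at which a sensor moving as c(t) = a*t**2 over the
    exposure [-T, T] momentarily tracks an object of constant velocity v
    (zero relative velocity), or None if v lies outside the swept range."""
    t_star = v / (2.0 * a)  # solve 2*a*t == v
    return t_star if abs(t_star) <= T else None
```

That single tracked instant per velocity is the intuition behind the result that the blur kernel is nearly the same, up to shift, for all motions within the design range.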
-
Patent number: 8379049
Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
Type: Grant
Filed: April 12, 2012
Date of Patent: February 19, 2013
Assignee: EveryScape, Inc.
Inventors: Byong Mok Oh, Fredo Durand
-
Publication number: 20120236019
Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
Type: Application
Filed: April 12, 2012
Publication date: September 20, 2012
Applicant: EVERYSCAPE, INC.
Inventors: Byong Mok Oh, Fredo Durand
-
Patent number: 8174538
Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
Type: Grant
Filed: August 13, 2009
Date of Patent: May 8, 2012
Assignee: EveryScape, Inc.
Inventors: Byong Mok Oh, Fredo Durand
-
Publication number: 20100073403
Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
Type: Application
Filed: August 13, 2009
Publication date: March 25, 2010
Applicant: EVERYSCAPE, INC.
Inventors: Byong Mok Oh, Fredo Durand
-
Patent number: 7609906
Abstract: A method and system acquire and display light fields. A continuous light field is reconstructed from input samples of an input light field of a 3D scene acquired by cameras according to an acquisition parameterization. The continuous light field is reparameterized according to a display parameterization and then prefiltered and sampled to produce output samples having the display parameterization. The output samples are displayed as an output light field using a 3D display device.
Type: Grant
Filed: April 4, 2006
Date of Patent: October 27, 2009
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Wojciech Matusik, Hanspeter Pfister, Matthias Zwicker, Fredo Durand
-
Patent number: 7599555
Abstract: A method compresses a set of correlated signals by first converting each signal to a sequence of integers, which are further organized as a set of bit-planes. An inverse accumulator is applied to each bit-plane to produce a bit-plane of shifted bits, which are permuted according to a predetermined permutation to produce bit-planes of permuted bits. Each bit-plane of permuted bits is partitioned into a set of blocks of bits. Syndrome bits are generated for each block of bits according to a rate-adaptive base code. Subsequently, the syndrome bits are decompressed in a decoder to recover the original correlated signals.
Type: Grant
Filed: March 29, 2005
Date of Patent: October 6, 2009
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Morgan McGuire, Wojciech Matusik, Hanspeter Pfister, John F. Hughes, Fredo Durand
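The bit-plane and inverse-accumulator front end can be sketched in a few lines (one plausible reading with invented helper names; the permutation and syndrome-coding stages are omitted). Splitting integer samples into bit-planes and XOR-differencing each plane tends to turn slowly varying planes into sparse ones:

```python
def bit_planes(values, width=8):
    """Split non-negative integers into bit-planes, most significant first."""
    return [[(v >> b) & 1 for v in values] for b in range(width - 1, -1, -1)]

def inverse_accumulate(plane):
    """XOR each bit with its predecessor (a 1-bit inverse accumulator),
    leaving 1s only where the plane changes value."""
    return [plane[0]] + [plane[i] ^ plane[i - 1] for i in range(1, len(plane))]
```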
-
Publication number: 20090244300
Abstract: Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g. horizontal), applicants show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, applicants show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is not only invariant to object velocity, but preserves image frequency content nearly optimally. That is, static objects are degraded relative to their image from a static camera, but all moving objects within a given range of motions reconstruct well.
Type: Application
Filed: March 28, 2008
Publication date: October 1, 2009
Applicant: Massachusetts Institute of Technology
Inventors: Anat Levin, Peter Sand, Taeg Sang Cho, Fredo Durand, William T. Freeman
-
Patent number: 7593022
Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
Type: Grant
Filed: December 6, 2007
Date of Patent: September 22, 2009
Assignee: EveryScape, Inc.
Inventors: Byong Mok Oh, Fredo Durand
-
Publication number: 20080088641
Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
Type: Application
Filed: December 6, 2007
Publication date: April 17, 2008
Inventors: Byong Oh, Fredo Durand
-
Patent number: 7327374
Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
Type: Grant
Filed: June 23, 2003
Date of Patent: February 5, 2008
Inventors: Byong Mok Oh, Fredo Durand
-
Publication number: 20070229653
Abstract: A method and system acquire and display light fields. A continuous light field is reconstructed from input samples of an input light field of a 3D scene acquired by cameras according to an acquisition parameterization. The continuous light field is reparameterized according to a display parameterization and then prefiltered and sampled to produce output samples having the display parameterization. The output samples are displayed as an output light field using a 3D display device.
Type: Application
Filed: April 4, 2006
Publication date: October 4, 2007
Inventors: Wojciech Matusik, Hanspeter Pfister, Matthias Zwicker, Fredo Durand
-
Patent number: 7199793
Abstract: The invention provides a variety of tools and techniques for adding depth information to photographic images, and for editing and manipulating images that include depth information. The tools for working with such images include tools for “painting” in a depth channel, for using geometric primitives and other three-dimensional shapes to define depth in a two-dimensional image, and tools for “clone brushing” portions of an image with depth information while taking the depth information and lighting into account when copying from one portion of the image to another. The tools also include relighting tools that separate illumination information from texture information.
Type: Grant
Filed: May 20, 2003
Date of Patent: April 3, 2007
Assignee: Mok3, Inc.
Inventors: Byong Mok Oh, Fredo Durand, Max Chen
-
Publication number: 20060221248
Abstract: A method and system extract a matte from images acquired of a scene. A foreground image focused at a foreground in a scene, a background image focused at a background in the scene, and a pinhole image focused on the entire scene are acquired. These three images can be acquired sequentially by a single camera, or simultaneously by three cameras. In the latter case, foreground, background, and pinhole sequences of images can be acquired. The pinhole image is compared to the foreground image and the background image to extract a matte representing the scene. The comparison classifies pixels in the images as foreground, background, or unknown pixels. An optimizer minimizes an error function in the form of Fourier image equations using a gradient descent method. The error function expresses pixel intensity differences.
Type: Application
Filed: March 29, 2005
Publication date: October 5, 2006
Inventors: Morgan McGuire, Wojciech Matusik, Hanspeter Pfister, John Hughes, Fredo Durand