Patents by Inventor Fredo Durand

Fredo Durand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11887248
    Abstract: Systems and methods described herein relate to reconstructing a scene in three dimensions from a two-dimensional image. One embodiment processes an image using a detection transformer to detect an object in the scene and to generate a NOCS map of the object and a background depth map; uses MLPs to relate the object to a differentiable database of object priors (PriorDB); recovers, from the NOCS map, a partial 3D object shape; estimates an initial object pose; fits a PriorDB object prior to align in geometry and appearance with the partial 3D shape to produce a complete shape and refines the initial pose estimate; generates an editable and re-renderable 3D scene reconstruction based, at least in part, on the complete shape, the refined pose estimate, and the depth map; and controls the operation of a robot based, at least in part, on the editable and re-renderable 3D scene reconstruction.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: January 30, 2024
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology, The Board of Trustees of the Leland Stanford Junior University
    Inventors: Sergey Zakharov, Wadim Kehl, Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Dennis Park, Joshua Tenenbaum, Jiajun Wu, Fredo Durand, Vincent Sitzmann
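The NOCS-map step in the pipeline above can be illustrated with a minimal sketch (the function name and toy data are illustrative assumptions, not from the patent): a NOCS map assigns each detected-object pixel a coordinate in the object's Normalized Object Coordinate Space, so the partial 3D shape is recovered by reading those coordinates out under the detection mask.

```python
import numpy as np

def partial_shape_from_nocs(nocs_map, mask):
    """Recover a partial 3D point cloud from a NOCS map.

    Each masked pixel stores a surface coordinate of the object in its
    Normalized Object Coordinate Space [0, 1]^3, so the visible part of
    the shape is simply the set of those points, recentered on the
    canonical origin.
    """
    return nocs_map[mask] - 0.5

# Toy 4x4 NOCS map with a 2x2 detected-object region.
nocs = np.zeros((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
nocs[1:3, 1:3] = [[[0.2, 0.5, 0.5], [0.8, 0.5, 0.5]],
                  [[0.2, 0.6, 0.5], [0.8, 0.6, 0.5]]]
partial = partial_shape_from_nocs(nocs, mask)
```

The recovered partial point cloud is what the patent's PriorDB fitting step would then align with a complete object prior.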
  • Publication number: 20220414974
    Abstract: Systems and methods described herein relate to reconstructing a scene in three dimensions from a two-dimensional image. One embodiment processes an image using a detection transformer to detect an object in the scene and to generate a NOCS map of the object and a background depth map; uses MLPs to relate the object to a differentiable database of object priors (PriorDB); recovers, from the NOCS map, a partial 3D object shape; estimates an initial object pose; fits a PriorDB object prior to align in geometry and appearance with the partial 3D shape to produce a complete shape and refines the initial pose estimate; generates an editable and re-renderable 3D scene reconstruction based, at least in part, on the complete shape, the refined pose estimate, and the depth map; and controls the operation of a robot based, at least in part, on the editable and re-renderable 3D scene reconstruction.
    Type: Application
    Filed: March 16, 2022
    Publication date: December 29, 2022
    Applicants: Toyota Research Institute, Inc., Massachusetts Institute of Technology, The Board of Trustees of the Leland Stanford Junior University
    Inventors: Sergey Zakharov, Wadim Kehl, Vitor Guizilini, Adrien David Gaidon, Rares A. Ambrus, Dennis Park, Joshua Tenenbaum, Jiajun Wu, Fredo Durand, Vincent Sitzmann
  • Patent number: 11436839
    Abstract: The present disclosure provides systems and methods to detect occluded objects using shadow information to anticipate moving obstacles that are occluded behind a corner or other obstacle. The system may perform a dynamic threshold analysis on enhanced images allowing the detection of even weakly visible shadows. The system may classify an image sequence as either “dynamic” or “static”, enabling an autonomous vehicle, or other moving platform, to react and respond to a moving, yet occluded object by slowing down or stopping.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: September 6, 2022
    Assignees: Toyota Research Institute, Inc., Massachusetts Institute of Technology
    Inventors: Felix Maximilian Naser, Igor Gilitschenski, Guy Rosman, Alexander Andre Amini, Fredo Durand, Antonio Torralba, Gregory Wornell, William Freeman, Sertac Karaman, Daniela Rus
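The dynamic/static classification described in this abstract can be sketched with a toy heuristic (the amplification-and-threshold scheme here is a simplified assumption, not the patent's actual dynamic threshold analysis): deviations from the sequence mean are amplified so weakly visible shadows cross a threshold, and a sequence whose shadow masks change over time is labeled dynamic.

```python
import numpy as np

def classify_sequence(frames, amplification=10.0, threshold=1.0):
    """Classify an image sequence as 'dynamic' or 'static' from faint shadows.

    Deviations from the sequence mean are amplified so that weakly
    visible shadows cross the threshold; if the resulting shadow masks
    change from frame to frame, an occluded moving object is inferred.
    """
    frames = np.asarray(frames, dtype=float)
    enhanced = amplification * (frames - frames.mean(axis=0))
    masks = np.abs(enhanced) > threshold          # per-frame shadow masks
    changed = np.logical_xor(masks[1:], masks[:-1]).sum()
    return "dynamic" if changed > 0 else "static"

# Toy sequence: a faint shadow appears in the middle frame only.
static = np.ones((3, 4, 4))
dynamic = static.copy()
dynamic[1, 2, 2] = 0.8
```

A vehicle using such a classifier would treat a "dynamic" verdict as a cue to slow down before the occluding corner.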
  • Publication number: 20200143177
    Abstract: The present disclosure provides systems and methods to detect occluded objects using shadow information to anticipate moving obstacles that are occluded behind a corner or other obstacle. The system may perform a dynamic threshold analysis on enhanced images allowing the detection of even weakly visible shadows. The system may classify an image sequence as either “dynamic” or “static”, enabling an autonomous vehicle, or other moving platform, to react and respond to a moving, yet occluded object by slowing down or stopping.
    Type: Application
    Filed: November 2, 2018
    Publication date: May 7, 2020
    Inventors: Felix Maximilian Naser, Igor Gilitschenski, Guy Rosman, Alexander Andre Amini, Fredo Durand, Antonio Torralba, Gregory Wornell, William Freeman, Sertac Karaman, Daniela Rus
  • Patent number: 9280848
    Abstract: Rendering a scene with participating media is done by generating a depth map from a camera viewpoint and a shadow map from a light source, converting the shadow map using epipolar rectification to form a rectified shadow map (or generating the rectified shadow map directly), generating an approximation to visibility terms in a scattering integral, then computing a 1D min-max mipmap or other acceleration data structure for rectified shadow map rows and traversing that mipmap/data structure to find lit segments to accumulate values for the scattering integral for specific camera rays, and generating rendered pixel values that take into account accumulated values for the scattering integral for the camera rays. The scattering near an epipole of the rectified shadow map might be done using brute force ray marching when the epipole is on or near the screen. The process can be implemented using a GPU for parallel operations.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: March 8, 2016
    Assignee: Disney Enterprises Inc.
    Inventors: Jiawen Chen, Ilya Baran, Frédo Durand, Wojciech Jarosz
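The 1D min-max mipmap named in this abstract can be sketched directly (a minimal construction only; the traversal and the scattering integral itself are omitted): each level stores the minimum and maximum depth of pairs of texels below it, so a ray walking a rectified shadow map row can prove whole spans entirely lit or entirely shadowed without visiting every texel.

```python
import numpy as np

def build_minmax_mipmap(row):
    """Build a 1D min-max mipmap over one rectified shadow map row.

    Each level halves the resolution, keeping the (min, max) depth of
    the two texels it covers; the coarsest level bounds the whole row.
    """
    levels = []
    lo, hi = row.copy(), row.copy()
    while len(lo) > 1:
        lo = np.minimum(lo[0::2], lo[1::2])
        hi = np.maximum(hi[0::2], hi[1::2])
        levels.append((lo, hi))
    return levels

row = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
levels = build_minmax_mipmap(row)
```

During traversal, a camera ray whose depth lies below a node's min (or above its max) can accumulate that node's whole span into the scattering integral in one step.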
  • Publication number: 20140072228
    Abstract: In one embodiment, a method of amplifying temporal variation in at least two images includes converting two or more images to a transform representation. The method further includes, for each spatial position within the two or more images, examining a plurality of coefficient values. The method additionally includes calculating a first vector based on the plurality of coefficient values. The first vector can represent change from a first image to a second image of the at least two images describing deformation. The method also includes modifying the first vector to create a second vector. The method further includes calculating a second plurality of coefficients based on the second vector.
    Type: Application
    Filed: September 7, 2012
    Publication date: March 13, 2014
    Applicant: Massachusetts Institute of Technology
    Inventors: Michael Rubinstein, Neal Wadhwa, Fredo Durand, William T. Freeman
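The transform-domain amplification this abstract describes can be illustrated with a toy 1D sketch (a deliberate simplification: a global Fourier transform stands in for the localized transform representation, and a sinusoid stands in for image content): the per-frequency phase difference between two frames is the "first vector" describing deformation, and scaling it synthesizes a frame with magnified motion.

```python
import numpy as np

def magnify_shift(sig_a, sig_b, alpha):
    """Amplify the apparent motion between two 1D signals.

    The per-frequency phase difference between the two frames encodes
    the deformation; scaling that phase vector by alpha and reapplying
    it to the first frame magnifies the motion alpha times.
    """
    A, B = np.fft.fft(sig_a), np.fft.fft(sig_b)
    dphi = np.angle(B) - np.angle(A)           # phase change per frequency
    magnified = A * np.exp(1j * alpha * dphi)  # scaled "second vector"
    return np.real(np.fft.ifft(magnified))

n = 64
x = np.arange(n)
a = np.sin(2 * np.pi * x / n)        # reference frame
b = np.sin(2 * np.pi * (x - 1) / n)  # same signal shifted by 1 sample
out = magnify_shift(a, b, alpha=3.0)
```

For a pure sinusoid the result is exactly the reference shifted by alpha samples; for real images a localized transform is needed because different regions move differently.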
  • Patent number: 8451338
    Abstract: Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g. horizontal), applicants show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, applicants show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is not only invariant to object velocity, but preserves image frequency content nearly optimally. That is, static objects are degraded relative to their image from a static camera, but all moving objects within a given range of motions reconstruct well.
    Type: Grant
    Filed: March 28, 2008
    Date of Patent: May 28, 2013
    Assignee: Massachusetts Institute of Technology
    Inventors: Anat Levin, Peter Sand, Taeg Sang Cho, Fredo Durand, William T. Freeman
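The velocity invariance claimed in this abstract follows from completing the square, which a short numeric sketch can verify (parameter values here are arbitrary assumptions): under parabolic sensor motion, every object velocity traces the same parabola relative to the sensor, merely shifted in time and space, so all velocities in range produce the same blur kernel.

```python
import numpy as np

def relative_path(t, v, a=30.0, T=1.0):
    """Object-sensor displacement during exposure [0, T] when the sensor
    sweeps the parabola s(t) = a*(t - T/2)**2 and the object moves at
    constant velocity v."""
    return v * t - a * (t - T / 2.0) ** 2

# Completing the square: v*t - a*(t - T/2)**2 is the v = 0 parabola
# shifted in time by v/(2a) and in space by T*v/2 + v**2/(4a).
a, T = 30.0, 1.0
t = np.linspace(0.2, 0.8, 100)
v = 5.0
shift_t = v / (2 * a)
offset = T * v / 2 + v ** 2 / (4 * a)
aligned = relative_path(t + shift_t, v, a, T) - offset
```

Since a shift of the path only shifts the blur kernel, a single deconvolution kernel then works for all motions in the covered range.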
  • Patent number: 8379049
    Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
    Type: Grant
    Filed: April 12, 2012
    Date of Patent: February 19, 2013
    Assignee: EveryScape, Inc.
    Inventors: Byong Mok Oh, Fredo Durand
  • Publication number: 20120236019
    Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
    Type: Application
    Filed: April 12, 2012
    Publication date: September 20, 2012
    Applicant: EVERYSCAPE, INC.
    Inventors: Byong Mok Oh, Fredo Durand
  • Patent number: 8174538
    Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
    Type: Grant
    Filed: August 13, 2009
    Date of Patent: May 8, 2012
    Assignee: EveryScape, Inc.
    Inventors: Byong Mok Oh, Fredo Durand
  • Publication number: 20100073403
    Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
    Type: Application
    Filed: August 13, 2009
    Publication date: March 25, 2010
    Applicant: EVERYSCAPE, INC.
    Inventors: Byong Mok Oh, Fredo Durand
  • Patent number: 7609906
    Abstract: A method and system acquire and display light fields. A continuous light field is reconstructed from input samples of an input light field of a 3D scene acquired by cameras according to an acquisition parameterization. The continuous light field is reparameterized according to a display parameterization and then prefiltered and sampled to produce output samples having the display parameterization. The output samples are displayed as an output light field using a 3D display device.
    Type: Grant
    Filed: April 4, 2006
    Date of Patent: October 27, 2009
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Wojciech Matusik, Hanspeter Pfister, Matthias Zwicker, Fredo Durand
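The reparameterization step can be sketched for the common two-plane ray parameterization (the plane separations and function name are assumptions for illustration; the patent's actual parameterizations, prefiltering, and sampling are omitted): a ray recorded against the acquisition planes is intersected with the display's planes to obtain its new coordinates.

```python
def reparameterize_ray(u, s, acq_sep=1.0, disp_sep=0.5):
    """Map a two-plane ray from an acquisition parameterization (planes
    acq_sep apart) to a display parameterization (planes disp_sep apart,
    sharing the first plane at depth 0).

    The ray passes through (u, depth 0) and (s, depth acq_sep); its
    intersection with the display's second plane gives the new s'.
    """
    slope = (s - u) / acq_sep
    return u, u + slope * disp_sep
```

Applied to every sample, this linear map warps the acquired light field into display coordinates, after which prefiltering to the display's bandwidth avoids aliasing.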
  • Patent number: 7599555
    Abstract: A method compresses a set of correlated signals by first converting each signal to a sequence of integers, which are further organized as a set of bit-planes. An inverse accumulator is applied to each bit-plane to produce a bit-plane of shifted bits, which are permuted according to a predetermined permutation to produce bit-planes of permuted bits. Each bit-plane of permuted bits is partitioned into a set of blocks of bits. Syndrome bits are generated for each block of bits according to a rate-adaptive base code. Subsequently, the syndrome bits are decompressed in a decoder to recover the original correlated signals.
    Type: Grant
    Filed: March 29, 2005
    Date of Patent: October 6, 2009
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Morgan McGuire, Wojciech Matusik, Hanspeter Pfister, John F. Hughes, Fredo Durand
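The first two stages of this pipeline, bit-plane decomposition and the inverse accumulator, can be sketched as follows (the XOR-with-predecessor form of the inverse accumulator is an assumption typical of such coders; the permutation and syndrome coding stages are omitted):

```python
import numpy as np

def bit_planes(values, n_bits=8):
    """Split a sequence of 8-bit integers into bit-planes, MSB first."""
    v = np.asarray(values, dtype=np.uint8)
    return [(v >> b) & 1 for b in range(n_bits - 1, -1, -1)]

def inverse_accumulate(plane):
    """XOR each bit with its predecessor, decorrelating runs of equal
    bits before the permutation and syndrome-coding stages."""
    out = plane.copy()
    out[1:] ^= plane[:-1]
    return out

vals = [200, 201, 201, 64]
planes = bit_planes(vals)
shifted = [inverse_accumulate(p) for p in planes]
```

A decoder reverses the steps: a running XOR (the accumulator) restores each plane, and reassembling the planes restores the integers.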
  • Publication number: 20090244300
    Abstract: Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g. horizontal), applicants show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, applicants show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is not only invariant to object velocity, but preserves image frequency content nearly optimally. That is, static objects are degraded relative to their image from a static camera, but all moving objects within a given range of motions reconstruct well.
    Type: Application
    Filed: March 28, 2008
    Publication date: October 1, 2009
    Applicant: Massachusetts Institute of Technology
    Inventors: Anat Levin, Peter Sand, Taeg Sang Cho, Fredo Durand, William T. Freeman
  • Patent number: 7593022
    Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
    Type: Grant
    Filed: December 6, 2007
    Date of Patent: September 22, 2009
    Assignee: EveryScape, Inc.
    Inventors: Byong Mok Oh, Fredo Durand
  • Publication number: 20080088641
    Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
    Type: Application
    Filed: December 6, 2007
    Publication date: April 17, 2008
    Inventors: Byong Oh, Fredo Durand
  • Patent number: 7327374
    Abstract: The invention provides tools and techniques for clone brushing pixels in an image while accounting for inconsistencies in apparent depth and orientation within the image. The techniques do not require any depth information to be present in the image, and the data structure of the image is preserved. The techniques allow for color compensation between source and destination regions. A snapping technique is also provided to facilitate increased accuracy in selecting source and destination positions.
    Type: Grant
    Filed: June 23, 2003
    Date of Patent: February 5, 2008
    Inventors: Byong Mok Oh, Fredo Durand
  • Publication number: 20070229653
    Abstract: A method and system acquire and display light fields. A continuous light field is reconstructed from input samples of an input light field of a 3D scene acquired by cameras according to an acquisition parameterization. The continuous light field is reparameterized according to a display parameterization and then prefiltered and sampled to produce output samples having the display parameterization. The output samples are displayed as an output light field using a 3D display device.
    Type: Application
    Filed: April 4, 2006
    Publication date: October 4, 2007
    Inventors: Wojciech Matusik, Hanspeter Pfister, Matthias Zwicker, Fredo Durand
  • Patent number: 7199793
    Abstract: The invention provides a variety of tools and techniques for adding depth information to photographic images, and for editing and manipulating images that include depth information. The tools for working with such images include tools for “painting” in a depth channel, for using geometric primitives and other three-dimensional shapes to define depth in a two-dimensional image, and tools for “clone brushing” portions of an image with depth information while taking the depth information and lighting into account when copying from one portion of the image to another. The tools also include relighting tools that separate illumination information from texture information.
    Type: Grant
    Filed: May 20, 2003
    Date of Patent: April 3, 2007
    Assignee: Mok3, Inc.
    Inventors: Byong Mok Oh, Fredo Durand, Max Chen
  • Publication number: 20060221248
    Abstract: A method and system extracts a matte from images acquired of a scene. A foreground image focused at a foreground in a scene, a background image focused at a background in the scene, and a pinhole image focused on the entire scene are acquired. These three images can be acquired sequentially by a single camera, or simultaneously by three cameras. In the latter case, foreground, background and pinhole sequences of images can be acquired. The pinhole image is compared to the foreground image and the background image to extract a matte representing the scene. The comparison classifies pixels in the images as foreground, background, or unknown pixels. An optimizer minimizes an error function in the form of Fourier image equations using a gradient descent method. The error function expresses pixel intensity differences.
    Type: Application
    Filed: March 29, 2005
    Publication date: October 5, 2006
    Inventors: Morgan McGuire, Wojciech Matusik, Hanspeter Pfister, John Hughes, Fredo Durand
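The pixel-classification step of this comparison can be sketched with a simple tolerance test (the tolerance scheme and toy values are assumptions; the patent's actual comparison and the Fourier-domain optimizer are omitted): a pixel whose pinhole value matches the foreground-focused image but not the background-focused one is labeled foreground, the symmetric case is background, and everything else is left unknown.

```python
import numpy as np

FG, BG, UNKNOWN = 1, 0, -1

def classify_pixels(pinhole, fg_focused, bg_focused, tol=0.05):
    """Label each pixel foreground, background, or unknown by comparing
    the all-in-focus pinhole image with the two selectively focused
    images; only the unknown pixels need further optimization."""
    d_fg = np.abs(pinhole - fg_focused)
    d_bg = np.abs(pinhole - bg_focused)
    labels = np.full(pinhole.shape, UNKNOWN)
    labels[(d_fg < tol) & (d_bg >= tol)] = FG
    labels[(d_bg < tol) & (d_fg >= tol)] = BG
    return labels

# Toy 2x2 images: top-left matches the foreground-focused image,
# the bottom row matches the background-focused one, and the
# top-right pixel matches both, so it stays unknown.
pinhole = np.array([[0.9, 0.5], [0.1, 0.1]])
fg = np.array([[0.9, 0.5], [0.6, 0.5]])
bg = np.array([[0.3, 0.5], [0.1, 0.1]])
labels = classify_pixels(pinhole, fg, bg)
```

The resulting trimap confines the gradient-descent matte optimization to the unknown region.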