Patents by Inventor William T. Freeman

William T. Freeman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10288420
    Abstract: In one embodiment, a method comprises projecting, from a projector, a diffused speckle pattern on an object. The method further includes capturing, with a first camera in a particular location, a reference image of the object while the diffused speckle pattern is projected on the object. The method further includes capturing, with a second camera positioned in the particular location, a test image of the object while the diffused speckle pattern is projected on the object. The method further includes comparing speckles in the reference image to speckles in the test image. The projector, first camera, and second camera are removably provided to, and positioned at, a site of the object.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: May 14, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: YiChang Shih, Myers Abraham Davis, Samuel William Hasinoff, Frederic Durand, William T. Freeman
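
A minimal sketch of the speckle-comparison step described in the abstract above, assuming two aligned grayscale captures; the normalized cross-correlation here is an illustrative similarity measure, not necessarily the patented comparison:

```python
# Illustrative speckle comparison (not the patented method): score how
# well a test capture matches the speckle reference via normalized
# cross-correlation of the standardized images.
import numpy as np

def speckle_similarity(reference: np.ndarray, test: np.ndarray) -> float:
    """Pearson correlation of two aligned grayscale images.

    Values near 1.0 suggest the speckle pattern (and hence the surface)
    is unchanged; lower values indicate tampering or misalignment.
    """
    ref = (reference - reference.mean()) / reference.std()
    tst = (test - test.mean()) / test.std()
    return float((ref * tst).mean())
```
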
  • Publication number: 20190095698
    Abstract: The present disclosure provides systems and methods that perform face reconstruction based on an image of a face. In particular, one example system of the present disclosure combines a machine-learned image recognition model with a face modeler that uses a morphable model of a human's facial appearance. The image recognition model can be a deep learning model that generates an embedding in response to receipt of an image (e.g., an uncontrolled image of a face). The example system can further include a small, lightweight translation model structurally positioned between the image recognition model and the face modeler. The translation model can be a machine-learned model that is trained to receive the embedding generated by the image recognition model and, in response, output a plurality of facial modeling parameter values usable by the face modeler to generate a model of the face.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Inventors: Forrester H. Cole, Dilip Krishnan, William T. Freeman, David Benjamin Belanger
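
A minimal sketch of where the translation model sits, assuming a 128-dimensional recognition embedding and an 80-parameter morphable model; the layer sizes, random weights, and names are illustrative stand-ins, not the trained model from the application:

```python
# A tiny two-layer regression network standing in for the "translation
# model": recognition embedding in, morphable-model parameters out.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, HIDDEN, N_PARAMS = 128, 256, 80   # assumed sizes

W1 = rng.normal(scale=0.02, size=(EMBED_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.02, size=(HIDDEN, N_PARAMS))
b2 = np.zeros(N_PARAMS)

def translate(embedding: np.ndarray) -> np.ndarray:
    """Map a face-recognition embedding to facial modeling parameters."""
    h = np.maximum(embedding @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                        # linear regression head

params = translate(rng.normal(size=EMBED_DIM))  # stand-in embedding
```
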
  • Patent number: 10242427
    Abstract: Geometries of structures and objects deviate from their idealized models, even when the deviations are not visible to the naked eye. Embodiments of the present invention reveal and visualize such subtle geometric deviations, which can contain useful, surprising information. In an embodiment of the present invention, a method can include fitting a model of a geometry to an input image, matting a region of the input image according to the model based on a sampling function, generating a deviation function based on the matted region, extrapolating the deviation function to an image-wide warping field, and generating an output image by warping the input image according to the warping field. In an embodiment of the present invention, Deviation Magnification takes as input a still image or frame, fits parametric models to objects of interest, and generates an output image exaggerating departures from ideal geometries.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: March 26, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: Neal Wadhwa, Tali Dekel, Donglai Wei, Frederic Pierre Durand, William T. Freeman
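
A toy 1-D illustration of the Deviation Magnification idea from the abstract above: fit an idealized parametric model (here a straight line) and exaggerate the residual deviations. This sketches the concept only, not the patented matting-and-warping pipeline:

```python
# Fit an ideal line to nearly-straight samples, then amplify each
# sample's departure from the fit by a factor alpha.
import numpy as np

def exaggerate_deviation(x: np.ndarray, y: np.ndarray, alpha: float = 10.0):
    a, b = np.polyfit(x, y, 1)            # idealized geometry: a line
    ideal = a * x + b
    return ideal + alpha * (y - ideal)    # exaggerated deviations

x = np.linspace(0.0, 1.0, 100)
y = 0.5 * x + 0.01 * np.sin(12 * x)       # subtle bend, hard to see
y_exaggerated = exaggerate_deviation(x, y)
```
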
  • Patent number: 10217218
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: February 26, 2019
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
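
A minimal sketch of transform-domain motion amplification between two frames, substituting a single global Fourier transform for the localized transform representation the abstract describes; the amplification factor is an illustrative parameter:

```python
# Amplify the phase change between two frames in the Fourier domain and
# reconstruct an output frame with exaggerated motion.
import numpy as np

def amplify_motion(frame_a: np.ndarray, frame_b: np.ndarray,
                   alpha: float = 5.0) -> np.ndarray:
    Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    dphase = np.angle(Fb) - np.angle(Fa)       # per-coefficient change
    dphase = np.angle(np.exp(1j * dphase))     # wrap to [-pi, pi]
    out_phase = np.angle(Fa) + (1.0 + alpha) * dphase
    return np.real(np.fft.ifft2(np.abs(Fb) * np.exp(1j * out_phase)))
```
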
  • Patent number: 10217187
    Abstract: The method for dynamic video magnification magnifies small motions occurring simultaneously within large motions. The method involves selecting a region of interest from a video for magnification. The region of interest is warped to obtain a stabilized sequence of frames that discounts large motions. Each frame of the stabilized sequence is decomposed into a foreground layer, a background layer, and an alpha matte layer, and each of the foreground and alpha matte layers is magnified. Then a magnified sequence is generated from the magnified layers using matte inversion. Any image holes in the magnified sequence are filled in using texture synthesis. Finally, the magnified sequence is de-warped to the original space-time coordinates.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: February 26, 2019
    Assignees: QATAR FOUNDATION FOR EDUCATION, SCIENCE AND COMMUNITY DEVELOPMENT, MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Mohamed Abdelaziz A. Mohamed Elgharib, Mohamed M. Hefeeda, William T. Freeman, Frederic Durand
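
A minimal sketch of two steps from the pipeline above, assuming the region of interest has already been warped/stabilized and decomposed into layers; the linear amplification about the temporal mean and the standard matting composite are illustrative stand-ins:

```python
# Magnify a stabilized layer sequence, then recomposite one frame with
# the matting equation I = alpha * F + (1 - alpha) * B.
import numpy as np

def magnify_sequence(layers: np.ndarray, gain: float = 10.0) -> np.ndarray:
    """Amplify temporal variation in a (T, H, W) layer stack."""
    mean = layers.mean(axis=0, keepdims=True)
    return mean + (1.0 + gain) * (layers - mean)

def composite(F: np.ndarray, B: np.ndarray, matte: np.ndarray) -> np.ndarray:
    """Recombine magnified foreground F with background B via the matte."""
    a = matte[..., None]                  # (H, W) -> (H, W, 1)
    return a * F + (1.0 - a) * B          # F, B are (H, W, 3)
```
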
  • Publication number: 20190035086
    Abstract: A method and corresponding apparatus for measuring object motion using camera images may include measuring a global optical flow field of a scene. The scene may include target and reference objects captured in an image sequence. Motion of a camera used to capture the image sequence may be determined relative to the scene by measuring an apparent, sub-pixel motion of the reference object with respect to an imaging plane of the camera. Motion of the target object corrected for the camera motion may be calculated based on the optical flow field of the scene and on the apparent, sub-pixel motion of the reference object with respect to the imaging plane of the camera. Embodiments may enable measuring vibration of structures and objects from long distance in relatively uncontrolled settings, with or without accelerometers, with high signal-to-noise ratios.
    Type: Application
    Filed: February 28, 2017
    Publication date: January 31, 2019
    Inventors: Oral Buyukozturk, William T. Freeman, Frederic Durand, Myers Abraham Davis, Neal Wadhwa, Justin G. Chen
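
A minimal sketch of the camera-motion correction described above, assuming a dense optical flow field from any estimator and boolean masks for the target and the (assumed static) reference object; the names are illustrative:

```python
# Subtract the reference object's apparent (camera-induced) motion from
# the target object's measured motion.
import numpy as np

def corrected_target_motion(flow: np.ndarray,
                            target_mask: np.ndarray,
                            reference_mask: np.ndarray) -> np.ndarray:
    """flow: (H, W, 2) optical flow; masks: boolean (H, W) arrays.

    Camera motion is estimated as the mean apparent motion of the
    reference object; the target's mean motion is corrected by it.
    """
    camera_motion = flow[reference_mask].mean(axis=0)
    target_motion = flow[target_mask].mean(axis=0)
    return target_motion - camera_motion
```
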
  • Publication number: 20180352208
    Abstract: A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions, and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between the left wavelet and the right wavelet using a phase difference between the corresponding wavelets. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
    Type: Application
    Filed: June 5, 2018
    Publication date: December 6, 2018
    Inventors: Wojciech Matusik, Piotr K. Didyk, Ph.D., William T. Freeman, Petr Kellnhofer, Pitchaya Sitthi-Amorn, Frederic Durand, Szu-Po Wang
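
A minimal sketch of refining a disparity estimate from wavelet phase on one pair of scanlines, using a single complex Gabor filter as a stand-in for the publication's wavelet decomposition; the filter frequency and names are assumptions:

```python
# Refine an initial integer disparity to sub-pixel precision using the
# phase difference between left and right complex filter responses.
import numpy as np

def gabor_response(scanline: np.ndarray, omega: float, sigma: float = 4.0):
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * omega * t)
    return np.convolve(scanline, kernel, mode="same")

def refine_disparity(left, right, d_init: int, omega: float = 0.5):
    L, R = gabor_response(left, omega), gabor_response(right, omega)
    x = np.arange(len(left))
    xr = np.clip(x + d_init, 0, len(right) - 1)
    dphi = np.angle(L * np.conj(R[xr]))   # wrapped phase difference
    return d_init + dphi / omega          # sub-pixel refinement
```
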
  • Patent number: 10129658
    Abstract: A method of recovering audio signals and corresponding apparatus according to an embodiment of the present invention using video or other sequence of images enables recovery of sound that causes vibrations of a surface. An embodiment method includes combining representations of local motions of a surface to produce a global motion signal of the surface. The local motions are captured in a series of images of features of the surface, and the global motion signal represents a sound within an environment in which the surface is located. Some embodiments compare representations of local motions of a surface to determine which motions are in-phase or out-of-phase with each other, enabling visualization of surface vibrational modes. Embodiments are passive, as compared to other forms of remote audio recovery that employ active sensing, such as laser microphone systems. Example applications for the embodiments include espionage and surveillance.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: November 13, 2018
    Assignee: Massachusetts Institute of Technology
    Inventors: Michael Rubinstein, Myers Abraham Davis, Frederic Durand, William T. Freeman, Neal Wadhwa
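
A minimal sketch of the combining step from the abstract above, using zero-mean temporal intensity variation at each pixel as a crude stand-in for the recovered local motion signals:

```python
# Average per-pixel temporal variation into a single 1-D signal, one
# sample per frame, approximating the surface's global motion signal.
import numpy as np

def global_motion_signal(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale video of the vibrating surface."""
    local = frames - frames.mean(axis=0, keepdims=True)  # local signals
    signal = local.mean(axis=(1, 2))                     # combine them
    return signal / (np.abs(signal).max() + 1e-9)        # normalize
```
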
  • Publication number: 20180268543
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Application
    Filed: May 17, 2018
    Publication date: September 20, 2018
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Publication number: 20180225803
    Abstract: The method for dynamic video magnification magnifies small motions occurring simultaneously within large motions. The method involves selecting a region of interest from a video for magnification. The region of interest is warped to obtain a stabilized sequence of frames that discounts large motions. Each frame of the stabilized sequence is decomposed into a foreground layer, a background layer, and an alpha matte layer, and each of the foreground and alpha matte layers is magnified. Then a magnified sequence is generated from the magnified layers using matte inversion. Any image holes in the magnified sequence are filled in using texture synthesis. Finally, the magnified sequence is de-warped to the original space-time coordinates.
    Type: Application
    Filed: June 3, 2016
    Publication date: August 9, 2018
    Applicants: QATAR FOUNDATION FOR EDUCATION, SCIENCE AND COMMUNITY DEVELOPMENT, MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: MOHAMED ABDELAZIZ A. MOHAMED ELGHARIB, MOHAMED M. HEFEEDA, WILLIAM T. FREEMAN, FREDERIC DURAND
  • Patent number: 10037609
    Abstract: A method and corresponding device for identifying operational mode shapes of an object in a video stream includes extracting pixel-wise Eulerian motion signals of an object from an undercomplete representation of frames within a video stream. Pixel-wise Eulerian motion signals are downselected to produce a representative set of Eulerian motion signals of the object. Operational mode shapes of the object are identified based on the representative set. Resonant frequencies can also be identified. Embodiments enable vibrational characteristics of objects to be determined using video in near real time.
    Type: Grant
    Filed: February 1, 2016
    Date of Patent: July 31, 2018
    Assignee: Massachusetts Institute of Technology
    Inventors: Justin Gejune Chen, Oral Buyukozturk, William T. Freeman, Frederic Pierre Durand, Myers Abraham Davis, Neal Wadhwa
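
A minimal sketch of the frequency-identification step, assuming the representative set of Eulerian motion signals has already been extracted and downselected; the averaging scheme and `k` are illustrative:

```python
# Pick the k strongest peaks of the average power spectrum of the
# representative motion signals as candidate resonant frequencies.
import numpy as np

def resonant_frequencies(signals: np.ndarray, fps: float, k: int = 3):
    """signals: (N, T) motion time series sampled at fps frames/second."""
    centered = signals - signals.mean(axis=1, keepdims=True)
    power = (np.abs(np.fft.rfft(centered, axis=1)) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(signals.shape[1], d=1.0 / fps)
    top = np.argsort(power[1:])[::-1][:k] + 1   # skip the DC bin
    return freqs[top]
```
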
  • Patent number: 10007986
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: June 26, 2018
    Assignees: MASSACHUSETTS INSTITUTE OF TECHNOLOGY, QUANTA COMPUTER, INC.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Publication number: 20180096482
    Abstract: An apparatus according to an embodiment of the present invention enables measurement and visualization of a refractive field such as a fluid. An embodiment device obtains video captured by a video camera with an imaging plane. Representations of apparent motions in the video are correlated to determine actual motions of the refractive field. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Embodiments can render refractive flow visualizations for augmented reality, wearable devices, and video microscopes.
    Type: Application
    Filed: November 21, 2017
    Publication date: April 5, 2018
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
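
A minimal sketch of measuring the apparent "wiggle" motion of a textured background seen through a refractive field, using single-scale Lucas-Kanade-style gradient flow as a stand-in for the correlation approach the abstract describes:

```python
# Dense gradient-based estimate of apparent sub-pixel displacement
# between an undistorted reference view and an observed frame.
import numpy as np

def apparent_motion(ref: np.ndarray, obs: np.ndarray, win: int = 7):
    """Return an (H, W, 2) field of apparent displacements (x, y)."""
    Iy, Ix = np.gradient(ref)            # spatial gradients
    It = obs - ref                       # temporal difference
    H, W = ref.shape
    flow = np.zeros((H, W, 2))
    r = win // 2
    for y in range(r, H - r):
        for x in range(r, W - r):
            gx = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            gy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            gt = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([gx, gy], axis=1)
            v, *_ = np.linalg.lstsq(A, -gt, rcond=None)
            flow[y, x] = v               # (u, v) for this pixel
    return flow
```
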
  • Publication number: 20180061063
    Abstract: A method and corresponding apparatus for measuring object motion using camera images may include measuring a global optical flow field of a scene. The scene may include target and reference objects captured in an image sequence. Motion of a camera used to capture the image sequence may be determined relative to the scene by measuring an apparent, sub-pixel motion of the reference object with respect to an imaging plane of the camera. Motion of the target object corrected for the camera motion may be calculated based on the optical flow field of the scene and on the apparent, sub-pixel motion of the reference object with respect to the imaging plane of the camera. Embodiments may enable measuring vibration of structures and objects from long distance in relatively uncontrolled settings, with or without accelerometers, with high signal-to-noise ratios.
    Type: Application
    Filed: February 28, 2017
    Publication date: March 1, 2018
    Inventors: Oral Buyukozturk, William T. Freeman, Frederic Durand, Myers Abraham Davis, Neal Wadhwa, Justin G. Chen
  • Publication number: 20180047160
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Application
    Filed: September 29, 2017
    Publication date: February 15, 2018
    Applicant: Quanta Computer, Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Publication number: 20180032838
    Abstract: Geometries of structures and objects deviate from their idealized models, even when the deviations are not visible to the naked eye. Embodiments of the present invention reveal and visualize such subtle geometric deviations, which can contain useful, surprising information. In an embodiment of the present invention, a method can include fitting a model of a geometry to an input image, matting a region of the input image according to the model based on a sampling function, generating a deviation function based on the matted region, extrapolating the deviation function to an image-wide warping field, and generating an output image by warping the input image according to the warping field. In an embodiment of the present invention, Deviation Magnification takes as input a still image or frame, fits parametric models to objects of interest, and generates an output image exaggerating departures from ideal geometries.
    Type: Application
    Filed: July 29, 2016
    Publication date: February 1, 2018
    Inventors: Neal Wadhwa, Tali Dekel, Donglai Wei, Frederic Durand, William T. Freeman
  • Patent number: 9842404
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: December 12, 2017
    Assignee: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
  • Patent number: 9811901
    Abstract: In one embodiment, a method of amplifying temporal variation in at least two images comprises examining pixel values of the at least two images. The temporal variation of the pixel values between the at least two images can be below a particular threshold. The method can further include applying signal processing to the pixel values.
    Type: Grant
    Filed: March 26, 2013
    Date of Patent: November 7, 2017
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman
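
A minimal sketch of amplifying small temporal variation in pixel values, using a Butterworth band-pass filter as an illustrative stand-in for the patented signal processing; the cutoff frequencies and gain are assumptions:

```python
# Band-pass each pixel's intensity over time, scale the filtered
# variation, and add it back to the input video.
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_temporal(frames: np.ndarray, fps: float,
                     lo: float = 0.8, hi: float = 3.0,
                     gain: float = 50.0) -> np.ndarray:
    """frames: (T, H, W) grayscale video; returns the amplified video."""
    b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    variation = filtfilt(b, a, frames, axis=0)   # temporal band-pass
    return frames + gain * variation
```
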
  • Patent number: 9805475
    Abstract: In one embodiment, a method of amplifying temporal variation in at least two images includes converting two or more images to a transform representation. The method further includes, for each spatial position within the two or more images, examining a plurality of coefficient values. The method additionally includes calculating a first vector based on the plurality of coefficient values. The first vector can represent change from a first image to a second image of the at least two images describing deformation. The method also includes modifying the first vector to create a second vector. The method further includes calculating a second plurality of coefficients based on the second vector.
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: October 31, 2017
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Michael Rubinstein, Neal Wadhwa, Frederic Durand, William T. Freeman, Hao-yu Wu, Eugene Inghaw Shih, John V. Guttag
  • Patent number: 9756316
    Abstract: Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they preferably use correctly filtered content from multiple viewpoints. The filtered content, however, may not be easily obtained with current stereoscopic production pipelines. The proposed method and system takes a stereoscopic video as an input and converts it to multi-view and filtered video streams that may be used to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and inter-perspective antialiasing into a single filtering process. The whole algorithm is simple and may be efficiently implemented on current GPUs to yield real-time performance. Furthermore, the ability to retarget disparity is naturally supported. The method is robust and works with transparent materials and specularities. The method provides superior results when compared to state-of-the-art depth-based rendering methods.
    Type: Grant
    Filed: November 3, 2014
    Date of Patent: September 5, 2017
    Assignee: Massachusetts Institute of Technology
    Inventors: Piotr Krzysztof Didyk, Pitchaya Sitthi-Amorn, Wojciech Matusik, Frederic Durand, William T. Freeman