Patents by Inventor Frederic Durand

Frederic Durand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240083599
    Abstract: A spacecraft for distributing electrical energy to client craft at points situated in free space, in orbit and/or on a celestial body includes a main structure equipped with an electric thruster, a chemical thruster and a solar generator, a first fuel container for fuel intended for the electric thruster, and a second fuel container for fuel intended for the chemical thruster. The spacecraft is modular: the main structure can alternately be coupled to or decoupled from the first container or the second container, the first and second containers can be coupled to or decoupled from one another, and the solar generator can be deployed or retracted.
    Type: Application
    Filed: November 13, 2023
    Publication date: March 14, 2024
    Applicants: Centre National d'Études Spatiales, Safran Spacecraft Propulsion, THALES
    Inventors: Pascal BULTEL, Gautier DURAND, Nicolas THIRY, Marie ANSART, Gilles BOUHOURS, Olivier DUCHEMIN, Frédéric MARCHANDISE
  • Patent number: 11672436
    Abstract: Heart rates and beat lengths of a subject can be extracted from videos of the subject by measuring subtle head motion caused by the Newtonian reaction to the influx of blood at each beat. Embodiments track features on the video images of the subject's head and perform principal component analysis (PCA) to decompose the feature location-time series into a set of component motions. The method or system then selects a component that best corresponds to heartbeats based on its temporal frequency spectrum. Finally, the motion projected to this component is analyzed and peaks of the location-time series are identified, which correspond to heartbeats. Pulse rate measurements or heart rate measurements of the subject are output.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: June 13, 2023
    Assignee: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Guha Balakrishnan, John V. Guttag, Frederic Durand
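The abstract above describes a pipeline of feature tracking, principal component analysis, spectral component selection, and peak detection. The following is a minimal Python/NumPy sketch of the last three of those steps, not the patented implementation: the `trajectories` input (tracked vertical feature positions), the pulse band, and the filter settings are illustrative assumptions, and feature tracking itself is omitted.

```python
# Minimal sketch of PCA-based pulse extraction from head-motion trajectories.
# Assumes `trajectories` is an (n_features, n_frames) array of vertical feature
# positions already tracked in the video; tracking itself is not shown here.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(trajectories, fps, band=(0.75, 2.0)):
    """Return beats per minute and beat lengths estimated from trajectories."""
    # Band-pass each trajectory to a plausible pulse band (45-120 bpm here).
    b, a = butter(4, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, trajectories, axis=1)

    # PCA via SVD of the mean-centered trajectories.
    centered = filtered - filtered.mean(axis=1, keepdims=True)
    _, _, components = np.linalg.svd(centered.T, full_matrices=False)
    projections = centered.T @ components.T          # (n_frames, n_components)

    # Pick the component whose spectrum is most concentrated in the pulse band.
    freqs = np.fft.rfftfreq(projections.shape[0], d=1.0 / fps)
    spectra = np.abs(np.fft.rfft(projections, axis=0)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    periodicity = spectra[in_band].sum(axis=0) / spectra.sum(axis=0)
    best = projections[:, np.argmax(periodicity)]

    # Peaks of the selected component correspond to individual beats.
    peaks, _ = find_peaks(best, distance=fps / band[1])
    beat_lengths = np.diff(peaks) / fps               # seconds between beats
    return 60.0 / beat_lengths.mean(), beat_lengths
```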
  • Patent number: 10997329
    Abstract: Structural health monitoring (SHM) is essential but can be expensive to perform. In an embodiment, a method includes sensing vibrations at a plurality of locations of a structure with a plurality of time-synchronized sensors. The method further includes determining a first set of dependencies of all of the time-synchronized sensors at a first sample time on any sensors at a second sample time, and determining a second set of dependencies of all of the time-synchronized sensors at the second sample time on any sensors at a third sample time. The second sample time is later than the first sample time, and the third sample time is later than the second sample time. The method then determines that the structure has changed if the first set of dependencies differs from the second set of dependencies. Automated SHM can therefore ensure safety at a lower cost to building owners.
    Type: Grant
    Filed: February 1, 2016
    Date of Patent: May 4, 2021
    Assignees: Massachusetts Institute of Technology, Shell Oil Company
    Inventors: William T. Freeman, Oral Buyukozturk, John W. Fisher, III, Frederic Durand, Hossein Mobahi, Neal Wadhwa, Zoran Dzunic, Justin G. Chen, James Long, Reza Mohammadi Ghazi, Theodericus Johannes Henricus Smit, Sergio Daniel Kapusta
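As a rough illustration of the dependency-comparison idea in the abstract above, the sketch below fits a linear one-step predictor between successive sensor samples over two windows and flags a change when the fitted dependencies differ. This is a hypothetical stand-in; the patent does not specify this particular model, and the threshold is arbitrary.

```python
# Simplified, hypothetical sketch of the dependency-comparison idea: model how
# sensor readings at one sample time depend (linearly) on readings at the next
# sample time, and flag a structural change when that dependency model shifts.
# An ordinary least-squares one-step predictor is used purely as a stand-in.
import numpy as np

def dependency_matrix(window):
    """Fit X[t+1] ~ A @ X[t] over a window of shape (n_samples, n_sensors)."""
    past, future = window[:-1], window[1:]
    A, *_ = np.linalg.lstsq(past, future, rcond=None)
    return A                                    # (n_sensors, n_sensors)

def structure_changed(readings, split, threshold=0.5):
    """Compare dependencies before and after `split`; True if they differ."""
    A_first = dependency_matrix(readings[:split])
    A_second = dependency_matrix(readings[split:])
    return np.linalg.norm(A_first - A_second) > threshold
```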
  • Patent number: 10972713
    Abstract: A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between each left wavelet and its corresponding right wavelet using the phase difference between them. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 6, 2021
    Assignee: Massachusetts Institute of Technology
    Inventors: Wojciech Matusik, Piotr K. Didyk, William T. Freeman, Petr Kellnhofer, Pitchaya Sitthi-Amorn, Frederic Durand, Szu-Po Wang
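The refinement step described in the abstract above, adjusting an initial disparity by a wavelet phase difference, can be illustrated with a single complex Gabor band per scanline. The sketch below is a hypothetical simplification, not the patented method; the filter frequency, width, and sign convention are assumptions.

```python
# Hypothetical sketch of phase-difference disparity refinement: decompose
# corresponding left/right scanlines with a complex Gabor filter and nudge an
# initial (integer) disparity by the phase difference divided by the filter's
# spatial frequency. Filter parameters are illustrative.
import numpy as np

def gabor_response(scanline, freq=0.15, sigma=6.0):
    """Complex Gabor responses of a 1-D scanline (filtering via convolution)."""
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.convolve(scanline, kernel, mode="same")

def refine_disparity(left_line, right_line, init_disp, freq=0.15):
    """Refine an initial per-pixel disparity using wavelet phase differences."""
    L = gabor_response(left_line, freq)
    R = gabor_response(right_line, freq)
    cols = np.arange(len(left_line))
    matched = np.clip(cols - init_disp.astype(int), 0, len(left_line) - 1)
    # Phase difference between a left coefficient and its matched right one.
    dphi = np.angle(L[cols] * np.conj(R[matched]))
    return init_disp + dphi / (2 * np.pi * freq)   # sub-pixel correction
```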
  • Patent number: 10834372
    Abstract: A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between each left wavelet and its corresponding right wavelet using the phase difference between them. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: November 10, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Wojciech Matusik, Piotr K. Didyk, William T. Freeman, Petr Kellnhofer, Pitchaya Sitthi-Amorn, Frederic Durand, Szu-Po Wang
  • Publication number: 20200214575
    Abstract: Heart rates and beat lengths of a subject can be extracted from videos of the subject by measuring subtle head motion caused by the Newtonian reaction to the influx of blood at each beat. Embodiments track features on the video images of the subject's head and perform principal component analysis (PCA) to decompose the feature location-time series into a set of component motions. The method or system then selects a component that best corresponds to heartbeats based on its temporal frequency spectrum. Finally, the motion projected to this component is analyzed and peaks of the location-time series are identified, which correspond to heartbeats. Pulse rate measurements or heart rate measurements of the subject are output.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventors: Guha Balakrishnan, John V. Guttag, Frederic Durand
  • Publication number: 20200145634
    Abstract: A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between each left wavelet and its corresponding right wavelet using the phase difference between them. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
    Type: Application
    Filed: December 23, 2019
    Publication date: May 7, 2020
    Inventors: Wojciech Matusik, Piotr K. Didyk, William T. Freeman, Petr Kellnhofer, Pitchaya Sitthi-Amorn, Frederic Durand, Szu-Po Wang
  • Patent number: 10638942
    Abstract: Heart rates and beat lengths can be extracted from videos by measuring subtle head motion caused by the Newtonian reaction to the influx of blood at each beat. In an embodiment of the present invention, a method tracks features on the head and performs principal component analysis (PCA) to decompose their trajectories into a set of component motions. The method then selects a component that best corresponds to heartbeats based on its temporal frequency spectrum. Finally, the motion projected to this component is analyzed and peaks of the trajectories are identified, which correspond to heartbeats.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: May 5, 2020
    Assignee: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Guha Balakrishnan, John V. Guttag, Frederic Durand
  • Patent number: 10636149
    Abstract: An apparatus according to an embodiment of the present invention enables measurement and visualization of a refractive field such as a fluid. An embodiment device obtains video captured by a video camera with an imaging plane. Representations of apparent motions in the video are correlated to determine actual motions of the refractive field. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Embodiments can render refractive flow visualizations for augmented reality, wearable devices, and video microscopes.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: April 28, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
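A minimal illustration of the underlying measurement idea: with a stationary textured background, any apparent motion between frames can be attributed to the refractive field in front of it. The sketch below estimates that apparent sub-pixel motion per patch with a basic Lucas-Kanade step; it is a simplified stand-in for the patented apparatus, and the patch size is an arbitrary choice.

```python
# Hypothetical sketch of refractive-flow measurement: apparent motion of a
# stationary textured background between consecutive grayscale frames is
# estimated patch by patch and interpreted as the refractive field's "wiggle".
import numpy as np

def apparent_motion(frame0, frame1, patch=16):
    """Per-patch apparent motion (dy, dx) between two grayscale frames."""
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)
    h, w = frame0.shape
    flow = np.zeros((h // patch, w // patch, 2))
    for i in range(h // patch):
        for j in range(w // patch):
            sl = (slice(i * patch, (i + 1) * patch),
                  slice(j * patch, (j + 1) * patch))
            A = np.stack([Iy[sl].ravel(), Ix[sl].ravel()], axis=1)
            b = -It[sl].ravel()
            v, *_ = np.linalg.lstsq(A, b, rcond=None)   # Lucas-Kanade step
            flow[i, j] = v
    return flow
```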
  • Patent number: 10380745
    Abstract: A method and corresponding apparatus for measuring object motion using camera images may include measuring a global optical flow field of a scene. The scene may include target and reference objects captured in an image sequence. Motion of a camera used to capture the image sequence may be determined relative to the scene by measuring an apparent, sub-pixel motion of the reference object with respect to an imaging plane of the camera. Motion of the target object corrected for the camera motion may be calculated based on the optical flow field of the scene and on the apparent, sub-pixel motion of the reference object with respect to the imaging plane of the camera. Embodiments may enable measuring vibration of structures and objects from long distance in relatively uncontrolled settings, with or without accelerometers, with high signal-to-noise ratios.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: August 13, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: Oral Buyukozturk, William T. Freeman, Frederic Durand, Myers Abraham Davis, Neal Wadhwa, Justin G. Chen
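The correction described above can be illustrated by treating the apparent motion of a stationary reference object as camera motion and subtracting it from the measured motion of the target. The sketch below does this for one-dimensional intensity profiles using cross-correlation with parabolic sub-pixel interpolation; it is a hypothetical simplification, not the patented method.

```python
# Hypothetical sketch: sub-pixel shifts of a reference object are taken as
# camera motion and removed from the target object's measured motion.
import numpy as np

def subpixel_shift(sig_ref, sig_cur):
    """1-D sub-pixel shift of sig_cur relative to sig_ref (cross-correlation)."""
    c = np.correlate(sig_cur - sig_cur.mean(), sig_ref - sig_ref.mean(), "full")
    k = np.argmax(c)
    if 0 < k < len(c) - 1:                       # parabolic peak interpolation
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        k = k + 0.5 * (c[k - 1] - c[k + 1]) / denom if denom else k
    return k - (len(sig_ref) - 1)

def corrected_target_motion(target_rows, reference_rows):
    """Per-frame target motion with apparent camera motion subtracted."""
    ref0, tgt0 = reference_rows[0], target_rows[0]
    camera = np.array([subpixel_shift(ref0, r) for r in reference_rows])
    target = np.array([subpixel_shift(tgt0, t) for t in target_rows])
    return target - camera
```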
  • Patent number: 10354397
    Abstract: Embodiments can be used to synthesize physically plausible animations of target objects responding to new, previously unseen forces. Knowledge of scene geometry or target material properties is not required, and a basis set for creating realistic synthesized motions can be developed using only input video of the target object. Embodiments can enable new animation and video production techniques.
    Type: Grant
    Filed: March 11, 2016
    Date of Patent: July 16, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: Myers Abraham Davis, Frederic Durand, Justin G. Chen
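One way to read the abstract above is that each image-space basis motion behaves like a damped oscillator that can be excited by a new force. The sketch below superposes such modal responses into a per-pixel displacement field; the mode shapes, frequencies, and damping ratios are assumed to have been recovered from the input video already, and the simple integrator is an illustrative stand-in rather than the patented synthesis.

```python
# Hypothetical sketch of modal response synthesis: each mode is driven as a
# damped harmonic oscillator by a user-supplied force, and the modal responses
# are superposed with their image-space mode shapes.
import numpy as np

def synthesize_motion(mode_shapes, freqs_hz, damping, force, fps):
    """Displacement field over time: sum over modes of shape * modal response.

    mode_shapes: (n_modes, h, w, 2) per-pixel mode displacements
    force:       (n_frames,) scalar forcing applied to every mode
    """
    n_frames = len(force)
    dt = 1.0 / fps
    response = np.zeros((len(freqs_hz), n_frames))
    for m, (f, z) in enumerate(zip(freqs_hz, damping)):
        w0 = 2 * np.pi * f
        q = qd = 0.0
        for t in range(n_frames):                 # semi-implicit Euler step
            qdd = force[t] - 2 * z * w0 * qd - w0**2 * q
            qd += dt * qdd
            q += dt * qd
            response[m, t] = q
    # (n_frames, h, w, 2): per-frame displacement to warp the source frame by.
    return np.einsum("mt,mhwc->thwc", response, mode_shapes)
```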
  • Patent number: 10288420
    Abstract: In one embodiment, a method comprises projecting, from a projector, a diffused light on an object. The method further includes capturing, with a first camera in a particular location, a reference image of the object while the diffused light is projected on the object. The method further includes capturing, with a second camera positioned in the particular location, a test image of the object while the diffused light is projected on the object. The method further includes comparing speckles in the reference image to speckles in the test image. The projector, first camera and second camera are removably provided to, and positioned at, the site of the object.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: May 14, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: YiChang Shih, Myers Abraham Davis, Samuel William Hasinoff, Frederic Durand, William T. Freeman
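The comparison step in the abstract above can be illustrated by correlating speckle patches between the reference and test images: where the surface has not changed, the speckle pattern decorrelates little. The sketch below is a hypothetical illustration with an arbitrary patch size, not the patented procedure.

```python
# Hypothetical sketch of speckle comparison: per-patch normalized correlation
# between a reference speckle image and a test speckle image; low correlation
# indicates a change between the two captures.
import numpy as np

def speckle_change_map(reference, test, patch=32):
    """Per-patch normalized correlation between reference and test images."""
    h, w = reference.shape
    scores = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            sl = (slice(i * patch, (i + 1) * patch),
                  slice(j * patch, (j + 1) * patch))
            a = reference[sl].astype(float).ravel()
            b = test[sl].astype(float).ravel()
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            scores[i, j] = (a @ b) / denom if denom else 1.0
    return scores            # values near 1.0 mean the speckles still match
```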
  • Patent number: 10217187
    Abstract: The method for dynamic video magnification magnifies small motions occurring simultaneously within large motions. The method involves selecting a region of interest from a video for magnification. The region of interest is warped to obtain a stabilized sequence of frames that discounts large motions. Each frame of the stabilized sequence is decomposed into a foreground layer, a background layer, and an alpha matte layer, and each of the foreground and alpha matte layers is magnified. Then a magnified sequence is generated from the magnified layers using matte inversion. Any image holes in the magnified sequence are filled in using texture synthesis. Finally, the magnified sequence is de-warped to the original space-time coordinates.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: February 26, 2019
    Assignees: QATAR FOUNDATION FOR EDUCATION, SCIENCE AND COMMUNITY DEVELOPMENT, MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Mohamed Abdelaziz A. Mohamed Elgharib, Mohamed M. Hefeeda, William T. Freeman, Frederic Durand
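The final recomposition of the magnified layers can be illustrated with the standard compositing equation: the magnified foreground is blended over the background using the magnified alpha matte. The sketch below shows only that step; warping, layer decomposition, magnification, hole filling, and de-warping are omitted, and the function is an illustrative stand-in rather than the patented method.

```python
# Hypothetical sketch of recompositing magnified layers over the background
# with the standard over-compositing equation.
import numpy as np

def recomposite(fg_magnified, alpha_magnified, background):
    """Composite magnified foreground/alpha over the background layer.

    fg_magnified, background: (h, w, 3) float images in [0, 1]
    alpha_magnified:          (h, w) matte in [0, 1]
    """
    a = alpha_magnified[..., None]
    return a * fg_magnified + (1.0 - a) * background
```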
  • Patent number: 10217218
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing the change from the first image to the second image of the two images and describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: February 26, 2019
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
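A one-dimensional illustration of the transform-domain amplification idea: represent two signals (for example, the same image row at two times) with a complex wavelet band, amplify the per-position phase change between them, and reconstruct. The single Gabor band below is a deliberate simplification of a full multi-scale transform and is not the patented implementation; the filter parameters and amplification factor are assumptions.

```python
# Hypothetical 1-D sketch of transform-domain motion amplification: amplify the
# per-position phase change between two complex wavelet representations.
import numpy as np

def gabor(signal, freq=0.1, sigma=8.0):
    """Complex Gabor responses of a 1-D signal."""
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.convolve(signal, k, mode="same")

def amplify_motion(sig0, sig1, alpha=5.0, freq=0.1):
    """Return a version of sig1 with its (small) shift from sig0 amplified."""
    c0, c1 = gabor(sig0, freq), gabor(sig1, freq)
    dphi = np.angle(c1 * np.conj(c0))        # per-position phase change
    amplified = c1 * np.exp(1j * alpha * dphi)
    return np.real(amplified)                # crude single-band reconstruction
```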
  • Publication number: 20190035086
    Abstract: A method and corresponding apparatus for measuring object motion using camera images may include measuring a global optical flow field of a scene. The scene may include target and reference objects captured in an image sequence. Motion of a camera used to capture the image sequence may be determined relative to the scene by measuring an apparent, sub-pixel motion of the reference object with respect to an imaging plane of the camera. Motion of the target object corrected for the camera motion may be calculated based on the optical flow field of the scene and on the apparent, sub-pixel motion of the reference object with respect to the imaging plane of the camera. Embodiments may enable measuring vibration of structures and objects from long distance in relatively uncontrolled settings, with or without accelerometers, with high signal-to-noise ratios.
    Type: Application
    Filed: February 28, 2017
    Publication date: January 31, 2019
    Inventors: Oral Buyukozturk, William T. Freeman, Frederic Durand, Myers Abraham Davis, Neal Wadhwa, Justin G. Chen
  • Publication number: 20180352208
    Abstract: A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between each left wavelet and its corresponding right wavelet using the phase difference between them. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
    Type: Application
    Filed: June 5, 2018
    Publication date: December 6, 2018
    Inventors: Wojciech Matusik, Piotr K. Didyk, William T. Freeman, Petr Kellnhofer, Pitchaya Sitthi-Amorn, Frederic Durand, Szu-Po Wang
  • Patent number: 10129658
    Abstract: A method of recovering audio signals, and a corresponding apparatus, according to an embodiment of the present invention uses video or another sequence of images to recover sound that causes vibrations of a surface. An embodiment method includes combining representations of local motions of a surface to produce a global motion signal of the surface. The local motions are captured in a series of images of features of the surface, and the global motion signal represents a sound within the environment in which the surface is located. Some embodiments compare representations of local motions of a surface to determine which motions are in-phase or out-of-phase with each other, enabling visualization of surface vibrational modes. Embodiments are passive, as compared to other forms of remote audio recovery that employ active sensing, such as laser microphone systems. Example applications for the embodiments include espionage and surveillance.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: November 13, 2018
    Assignee: Massachusetts Institute of Technology
    Inventors: Michael Rubinstein, Myers Abraham Davis, Frederic Durand, William T. Freeman, Neal Wadhwa
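The combination step described above can be illustrated by sign-aligning many noisy per-patch motion signals against a reference patch, averaging them into one global motion signal, and high-pass filtering the result so it can be treated as recovered audio. The sketch below assumes the local motion signals have already been extracted from a high-frame-rate video; it is a simplified stand-in, not the patented method.

```python
# Hypothetical sketch: combine per-patch local motion signals into a single
# global motion signal and high-pass filter it to obtain an audio-like signal.
# Assumes a high-speed capture so that cutoff_hz is below the Nyquist rate.
import numpy as np
from scipy.signal import butter, filtfilt

def recover_audio(local_motions, fps, cutoff_hz=20.0):
    """local_motions: (n_patches, n_frames) per-patch displacement signals."""
    centered = local_motions - local_motions.mean(axis=1, keepdims=True)
    reference = centered[0]
    # Flip patches that move out of phase with the reference patch.
    signs = np.sign(centered @ reference)
    signs[signs == 0] = 1.0
    global_motion = (signs[:, None] * centered).mean(axis=0)
    # Keep only the higher-frequency variation (relative to the frame rate).
    b, a = butter(2, cutoff_hz / (fps / 2), btype="high")
    return filtfilt(b, a, global_motion)
```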
  • Publication number: 20180268543
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing the change from the first image to the second image of the two images and describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Application
    Filed: May 17, 2018
    Publication date: September 20, 2018
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Publication number: 20180249145
    Abstract: Automultiscopic displays enable glasses-free 3D viewing by providing both binocular and motion parallax. Within the display field of view, different images are observed depending on the viewing direction. When moving outside the field of view, the observed images may repeat. Light fields produced by lenticular and parallax-barrier automultiscopic displays may have repetitive structure with significant discontinuities between the fields of view. This repetitive structure induces visual artifacts in the form of view discontinuities, depth reversals, and excessive disparities. To overcome this problem, a method modifies the presented automultiscopic light-field content to make it more repetitive. In the method, a light field is refined using global and local shearing, and the repeating fragments are then stitched. The method reduces the discontinuities in the displayed light field and leads to visual quality improvements.
    Type: Application
    Filed: April 11, 2018
    Publication date: August 30, 2018
    Inventors: Piotr Krzysztof Didyk, Song-Pei Du, Frederic Durand, Wojciech Matusik
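One ingredient mentioned in the abstract above, a global shear of the light field, amounts to shifting each view horizontally in proportion to its signed view index, which moves the zero-disparity plane. The sketch below shows only that operation on a discretized light field; local shearing and stitching of repeating fragments are omitted, and the shear amount is an arbitrary parameter.

```python
# Hypothetical sketch of a global shear on a discretized light field: each view
# is shifted horizontally in proportion to its signed view index.
import numpy as np

def global_shear(light_field, shear_px_per_view):
    """light_field: (n_views, h, w) array; returns the sheared light field."""
    n_views = light_field.shape[0]
    center = (n_views - 1) / 2.0
    sheared = np.empty_like(light_field)
    for v in range(n_views):
        shift = int(round((v - center) * shear_px_per_view))
        sheared[v] = np.roll(light_field[v], shift, axis=1)
    return sheared
```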
  • Publication number: 20180225803
    Abstract: The method for dynamic video magnification magnifies small motions occurring simultaneously within large motions. The method involves selecting a region of interest from a video for magnification. The region of interest is warped to obtain a stabilized sequence of frames that discounts large motions. Each frame of the stabilized sequence is decomposed into a foreground layer, a background layer, and an alpha matte layer, and each of the foreground and alpha matte layers is magnified. Then a magnified sequence is generated from the magnified layers using matte inversion. Any image holes in the magnified sequence are filled in using texture synthesis. Finally, the magnified sequence is de-warped to the original space-time coordinates.
    Type: Application
    Filed: June 3, 2016
    Publication date: August 9, 2018
    Applicants: QATAR FOUNDATION FOR EDUCATION, SCIENCE AND COMMUNITY DEVELOPMENT, MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Mohamed Abdelaziz A. Mohamed Elgharib, Mohamed M. Hefeeda, William T. Freeman, Frederic Durand