Patents by Inventor Michael Rubinstein

Michael Rubinstein has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10217218
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: February 26, 2019
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
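The transform-domain amplification this abstract describes can be illustrated with a minimal 1-D sketch (my own simplification, not the patented implementation): using a discrete Fourier transform as the transform representation, the per-coefficient phase difference between the two frames plays the role of the "first vector", and scaling that difference by an amplification factor produces the "second vector" from which the output frame is synthesized.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (the 'transform representation')."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(coeffs):
    """Inverse DFT, returning real samples (the output frame)."""
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * math.pi * k * t / n)
                for k, c in enumerate(coeffs)).real / n
            for t in range(n)]

def magnify_motion(frame_a, frame_b, alpha):
    """Amplify the motion between two 1-D 'frames' by a factor alpha.

    For each coefficient, the phase difference between the frames encodes
    the local motion; scaling that difference by alpha and re-synthesizing
    yields a frame whose motion is amplified."""
    coeffs_a, coeffs_b = dft(frame_a), dft(frame_b)
    out = []
    for a, b in zip(coeffs_a, coeffs_b):
        dphi = cmath.phase(b) - cmath.phase(a)   # motion-induced phase change
        out.append(abs(b) * cmath.exp(1j * (cmath.phase(a) + alpha * dphi)))
    return idft(out)

# A sinusoid shifted by 1 sample, magnified with alpha=2, comes back
# shifted by approximately 2 samples.
frame_a = [math.sin(2 * math.pi * t / 16) for t in range(16)]
frame_b = [math.sin(2 * math.pi * (t - 1) / 16) for t in range(16)]
magnified = magnify_motion(frame_a, frame_b, 2.0)
```

A real implementation would use a localized transform (e.g., a complex steerable pyramid) so that each coefficient covers only a neighborhood of a spatial position, as the abstract describes; the global DFT here is the simplest stand-in that exhibits the same phase-scaling principle.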
  • Patent number: 10129658
    Abstract: A method of recovering audio signals, and corresponding apparatus, according to an embodiment of the present invention uses video or another sequence of images to enable recovery of sound that causes vibrations of a surface. An embodiment method includes combining representations of local motions of a surface to produce a global motion signal of the surface. The local motions are captured in a series of images of features of the surface, and the global motion signal represents a sound within an environment in which the surface is located. Some embodiments compare representations of local motions of a surface to determine which motions are in-phase or out-of-phase with each other, enabling visualization of surface vibrational modes. Embodiments are passive, as compared to other forms of remote audio recovery that employ active sensing, such as laser microphone systems. Example applications for the embodiments include espionage and surveillance.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: November 13, 2018
    Assignee: Massachusetts Institute of Technology
    Inventors: Michael Rubinstein, Myers Abraham Davis, Frederic Durand, William T. Freeman, Neal Wadhwa
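The "combining representations of local motions" step can be sketched as follows. This is a hypothetical simplification: each tracked surface feature contributes a per-frame displacement signal, and signals that are negatively correlated with a reference feature are treated as out-of-phase and flipped before averaging.

```python
def recover_global_motion(local_signals):
    """Combine per-feature local-motion signals into one global signal.

    local_signals: list of lists; local_signals[f][t] is the displacement
    of surface feature f at frame t. A feature whose signal is negatively
    correlated with the first feature is treated as out-of-phase and
    flipped before averaging."""
    ref = local_signals[0]
    combined = [0.0] * len(ref)
    for sig in local_signals:
        corr = sum(a * b for a, b in zip(sig, ref))
        sign = 1.0 if corr >= 0 else -1.0   # in-phase vs. out-of-phase
        for t, v in enumerate(sig):
            combined[t] += sign * v
    return [v / len(local_signals) for v in combined]
```

Averaging many noisy local signals is what makes sub-pixel surface vibrations recoverable as audio; the sign test above is also the crudest possible version of the in-phase/out-of-phase comparison the abstract mentions for visualizing vibrational modes.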
  • Publication number: 20180268543
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Application
    Filed: May 17, 2018
    Publication date: September 20, 2018
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Publication number: 20180252697
    Abstract: Methods of forming a chip with fluidic channels include forming (e.g., milling) at least one nanofunnel with a wide end and a narrow end into a planar substrate, the nanofunnel having a length, with width and depth dimensions that both vary over its length and forming (e.g., milling) at least one nanochannel into the planar substrate at an interface adjacent the narrow end of the nanofunnel.
    Type: Application
    Filed: May 2, 2018
    Publication date: September 6, 2018
    Inventors: John Michael Ramsey, Laurent Menard, Jinsheng Zhou, Michael Rubinstein, Sergey Panyukov
  • Publication number: 20180223180
    Abstract: Various embodiments disclosed relate to wrinkled capsules for treatment of subterranean formations. In various embodiments, the present invention provides a method of treating a subterranean formation. The method includes placing in the subterranean formation a composition comprising at least one wrinkled capsule. The wrinkled capsule includes a hydrophobic core and a wrinkled shell.
    Type: Application
    Filed: September 2, 2015
    Publication date: August 9, 2018
    Applicant: Halliburton Energy Services, Inc.
    Inventors: Lee J. Hall, Jay Paul Deville, Maria Ina, Sergey Sheyko, Michael Rubinstein
  • Patent number: 10007986
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: June 26, 2018
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Patent number: 9989515
    Abstract: Methods of forming a chip with fluidic channels include forming (e.g., milling) at least one nanofunnel with a wide end and a narrow end into a planar substrate, the nanofunnel having a length, with width and depth dimensions that both vary over its length and forming (e.g., milling) at least one nanochannel into the planar substrate at an interface adjacent the narrow end of the nanofunnel.
    Type: Grant
    Filed: February 7, 2013
    Date of Patent: June 5, 2018
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: John Michael Ramsey, Laurent Menard, Jinsheng Zhou, Michael Rubinstein, Sergey Panyukov
  • Publication number: 20180096482
    Abstract: An apparatus according to an embodiment of the present invention enables measurement and visualization of a refractive field such as a fluid. An embodiment device obtains video captured by a video camera with an imaging plane. Representations of apparent motions in the video are correlated to determine actual motions of the refractive field. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Embodiments can render refractive flow visualizations for augmented reality, wearable devices, and video microscopes.
    Type: Application
    Filed: November 21, 2017
    Publication date: April 5, 2018
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
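The core correlation step above, recovering the apparent motion of a stationary textured background and attributing it to the translating refractive field, can be sketched in 1-D with simple block matching (an illustrative stand-in of my own, not the patented estimator):

```python
def apparent_shift(frame_a, frame_b, max_shift=3):
    """Estimate the apparent 1-D shift of a textured background between
    two frames by minimizing mean squared difference over the overlap.

    Because the background is modeled as stationary, this apparent
    motion is attributed to the refractive field (e.g., moving fluid)
    between the background and the camera."""
    n = len(frame_a)
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -s), min(n, n - s)
        err = sum((frame_a[i] - frame_b[i + s]) ** 2
                  for i in range(lo, hi)) / (hi - lo)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

A dense 2-D version of this estimate, computed per patch, yields the refractive flow field that the embodiments render for visualization.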
  • Publication number: 20180047160
    Abstract: In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images.
    Type: Application
    Filed: September 29, 2017
    Publication date: February 15, 2018
    Applicant: Quanta Computer, Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Publication number: 20170359523
    Abstract: The present disclosure relates to systems and methods for image capture. Namely, an image capture system may include a camera configured to capture images of a field of view, a display, and a controller. An initial image of the field of view from an initial camera pose may be captured. An obstruction may be determined to be observable in the field of view. Based on the obstruction, at least one desired camera pose may be determined. The at least one desired camera pose includes at least one desired position of the camera. A capture interface may be displayed, which may include instructions for moving the camera to the at least one desired camera pose. At least one further image of the field of view from the at least one desired camera pose may be captured. Captured images may be processed to remove the obstruction from a background image.
    Type: Application
    Filed: December 28, 2016
    Publication date: December 14, 2017
    Inventors: Michael Rubinstein, William Freeman, Ce Liu
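One simple way to "process captured images to remove the obstruction", assuming the frames from the different camera poses have already been registered to the common background, is a per-pixel median, which suppresses anything that occludes a given pixel in only a minority of the frames. This is an illustrative stand-in, not the processing claimed:

```python
import statistics

def remove_obstruction(registered_images):
    """Per-pixel median across images registered to a common background.

    An obstruction (fence, reflection, raindrops) that covers each
    background pixel in fewer than half of the images is voted out by
    the median, leaving the background image."""
    n_pixels = len(registered_images[0])
    return [statistics.median(img[p] for img in registered_images)
            for p in range(n_pixels)]
```

Moving the camera to the desired poses is what guarantees each background pixel is unobstructed in most frames, which is exactly the condition under which the median recovers it.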
  • Patent number: 9842404
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: December 12, 2017
    Assignee: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
  • Patent number: 9811901
    Abstract: In one embodiment, a method of amplifying temporal variation in at least two images comprises examining pixel values of the at least two images. The temporal variation of the pixel values between the at least two images can be below a particular threshold. The method can further include applying signal processing to the pixel values.
    Type: Grant
    Filed: March 26, 2013
    Date of Patent: November 7, 2017
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Hao-yu Wu, Michael Rubinstein, Eugene Inghaw Shih, John V. Guttag, Frederic Durand, William T. Freeman
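A minimal sketch of "applying signal processing to the pixel values" to amplify small temporal variation follows, using each frame's deviation from the temporal mean as a crude stand-in for the temporal band-pass filter an actual implementation would apply:

```python
def amplify_temporal_variation(frames, alpha):
    """Per pixel, amplify each frame's deviation from the temporal mean.

    frames: list of frames, each a flat list of pixel values.
    alpha:  amplification factor applied to the (small) temporal
            variation, which may be below the threshold of visibility."""
    n = len(frames)
    mean = [sum(frame[p] for frame in frames) / n
            for p in range(len(frames[0]))]
    return [[mean[p] + alpha * (pix - mean[p]) for p, pix in enumerate(frame)]
            for frame in frames]
```

Replacing the mean-subtraction with a band-pass filter tuned to, say, heart-rate frequencies is what lets this style of processing reveal a pulse in ordinary video.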
  • Patent number: 9805475
    Abstract: In one embodiment, a method of amplifying temporal variation in at least two images includes converting two or more images to a transform representation. The method further includes, for each spatial position within the two or more images, examining a plurality of coefficient values. The method additionally includes calculating a first vector based on the plurality of coefficient values. The first vector can represent change from a first image to a second image of the at least two images describing deformation. The method also includes modifying the first vector to create a second vector. The method further includes calculating a second plurality of coefficients based on the second vector.
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: October 31, 2017
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Michael Rubinstein, Neal Wadhwa, Frederic Durand, William T. Freeman, Hao-yu Wu, Eugene Inghaw Shih, John V. Guttag
  • Patent number: 9710917
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: July 18, 2017
    Assignee: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa
  • Patent number: 9338331
    Abstract: Some embodiments are directed to a method, corresponding system, and corresponding apparatus for rendering a video and/or image display to amplify small motions through video magnification. Some embodiments include a new compact image pyramid representation, the Riesz pyramid, that may be used for real-time, high-quality phase-based video magnification. Some embodiments are less overcomplete than even the smallest two orientation, octave-bandwidth complex steerable pyramid. Some embodiments are implemented using compact, efficient linear filters in the spatial domain. Some embodiments produce motion magnified videos that are of comparable quality to those using the complex steerable pyramid. In some embodiments, the Riesz pyramid is used with phase-based video magnification. The Riesz pyramid may phase-shift image features along their dominant orientation, rather than along every orientation like the complex steerable pyramid.
    Type: Grant
    Filed: January 8, 2015
    Date of Patent: May 10, 2016
    Assignee: Massachusetts Institute of Technology
    Inventors: Neal Wadhwa, Michael Rubinstein, Frederic Durand, William T. Freeman
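In 1-D, the Riesz-pyramid idea of recovering local amplitude and phase with compact spatial filters can be sketched with a three-tap derivative-style filter (the Riesz pyramid uses filters of this flavor; this toy version handles a single band and a single dimension):

```python
import math

def riesz_local_phase(band):
    """Approximate the Riesz transform of a band-passed 1-D signal with
    the three-tap filter [0.5, 0, -0.5], then recover local amplitude
    and local phase -- the quantities that phase-based magnification
    scales and shifts."""
    n = len(band)
    riesz = [0.5 * band[min(i + 1, n - 1)] - 0.5 * band[max(i - 1, 0)]
             for i in range(n)]
    amplitude = [math.hypot(b, r) for b, r in zip(band, riesz)]
    phase = [math.atan2(r, b) for b, r in zip(band, riesz)]
    return amplitude, phase

# For a band centered at a quarter of the sampling rate, the filter is a
# near-perfect quadrature partner, so local amplitude is constant.
band = [math.sin(math.pi * i / 2) for i in range(8)]
amp, ph = riesz_local_phase(band)
```

Because the filter is tiny and purely spatial, this decomposition is far cheaper than a complex steerable pyramid, which is the efficiency argument the abstract makes.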
  • Patent number: 9324005
    Abstract: In one embodiment, a method of amplifying temporal variation in at least two images includes converting two or more images to a transform representation. The method further includes, for each spatial position within the two or more images, examining a plurality of coefficient values. The method additionally includes calculating a first vector based on the plurality of coefficient values. The first vector can represent change from a first image to a second image of the at least two images describing deformation. The method also includes modifying the first vector to create a second vector. The method further includes calculating a second plurality of coefficients based on the second vector.
    Type: Grant
    Filed: December 6, 2012
    Date of Patent: April 26, 2016
    Assignees: Massachusetts Institute of Technology, Quanta Computer Inc.
    Inventors: Neal Wadhwa, Michael Rubinstein, Frederic Durand, William T. Freeman, Hao-yu Wu, Eugene Inghaw Shih, John V. Guttag
  • Patent number: 9239848
    Abstract: Techniques for semantically annotating images in a plurality of images, each image in the plurality of images comprising at least one image region. The techniques include identifying at least two similar images including a first image and a second image, identifying corresponding image regions in the first image and the second image, and assigning, using at least one processor, annotations to image regions in one or more images in the plurality of images by using a metric of fit indicative of a degree of match between the assigned annotations and the corresponding image regions. The metric of fit may depend on at least one annotation for each image in a subset of the plurality of images and the identified correspondence between image regions in the first image and the second image.
    Type: Grant
    Filed: February 6, 2012
    Date of Patent: January 19, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ce Liu, Michael Rubinstein
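The assignment step can be illustrated with a hypothetical toy version in which the "metric of fit" is simply agreement with the annotations on corresponding regions (the region identifiers and dictionary shapes here are my own, for illustration only):

```python
from collections import Counter

def propagate_annotations(known_labels, correspondences):
    """Label each unannotated region with the most common annotation
    among its corresponding regions in similar images.

    known_labels:    {region_id: annotation} for annotated regions.
    correspondences: {region_id: [corresponding region_ids]} from the
                     region-matching step between similar images."""
    labels = {}
    for region, matches in correspondences.items():
        votes = Counter(known_labels[m] for m in matches if m in known_labels)
        labels[region] = votes.most_common(1)[0][0] if votes else None
    return labels
```

The patented method optimizes a joint metric of fit over all images rather than labeling regions greedily, but the voting above captures the basic idea: correspondences let a few annotated images label many.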
  • Publication number: 20150319540
    Abstract: A method of recovering audio signals, and corresponding apparatus, according to an embodiment of the present invention uses video or another sequence of images to enable recovery of sound that causes vibrations of a surface. An embodiment method includes combining representations of local motions of a surface to produce a global motion signal of the surface. The local motions are captured in a series of images of features of the surface, and the global motion signal represents a sound within an environment in which the surface is located. Some embodiments compare representations of local motions of a surface to determine which motions are in-phase or out-of-phase with each other, enabling visualization of surface vibrational modes. Embodiments are passive, as compared to other forms of remote audio recovery that employ active sensing, such as laser microphone systems. Example applications for the embodiments include espionage and surveillance.
    Type: Application
    Filed: July 21, 2014
    Publication date: November 5, 2015
    Inventors: Michael Rubinstein, Myers Abraham Davis, Frederic Durand, William T. Freeman, Neal Wadhwa
  • Publication number: 20150195430
    Abstract: Some embodiments are directed to a method, corresponding system, and corresponding apparatus for rendering a video and/or image display to amplify small motions through video magnification. Some embodiments include a new compact image pyramid representation, the Riesz pyramid, that may be used for real-time, high-quality phase-based video magnification. Some embodiments are less overcomplete than even the smallest two orientation, octave-bandwidth complex steerable pyramid. Some embodiments are implemented using compact, efficient linear filters in the spatial domain. Some embodiments produce motion magnified videos that are of comparable quality to those using the complex steerable pyramid. In some embodiments, the Riesz pyramid is used with phase-based video magnification. The Riesz pyramid may phase-shift image features along their dominant orientation, rather than along every orientation like the complex steerable pyramid.
    Type: Application
    Filed: January 8, 2015
    Publication date: July 9, 2015
    Inventors: Neal Wadhwa, Michael Rubinstein, Frederic Durand, William T. Freeman
  • Publication number: 20150016690
    Abstract: An imaging method and corresponding apparatus according to an embodiment of the present invention enables measurement and visualization of fluid flow. An embodiment method includes obtaining video captured by a video camera with an imaging plane. Representations of motions in the video are correlated. A textured background of the scene can be modeled as stationary, with a refractive field translating between background and video camera. This approach offers multiple advantages over conventional fluid flow visualization, including an ability to use ordinary video equipment outside a laboratory without particle injection. Even natural backgrounds can be used, and fluid motion can be distinguished from refraction changes. Depth and three-dimensional information can be recovered using stereo video, and uncertainty methods can enhance measurement robustness where backgrounds are less textured. Example applications can include avionics and hydrocarbon leak detection.
    Type: Application
    Filed: May 15, 2014
    Publication date: January 15, 2015
    Applicant: Massachusetts Institute of Technology
    Inventors: William T. Freeman, Frederic Durand, Tianfan Xue, Michael Rubinstein, Neal Wadhwa