Patents by Inventor Hugues Hoppe

Hugues Hoppe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10818071
    Abstract: Techniques of rendering images include generating signed distance values (SDVs) along a ray from a specified viewpoint in terms of projected distances along that ray from given depth images. For each pixel in an image from the perspective of the specified viewpoint, a ray is traced into the three-dimensional scene represented by the image. An iterative step is performed along the ray, obtaining in each iteration a three-dimensional world-space point p and the signed distance sj as measured from depth view Dj. If the absolute value of the signed distance sj is greater than some truncation threshold parameter, the signed distance sj is replaced by a special undefined value. The defined signed-distance values are aggregated to obtain an overall signed distance s. Finally, the roots or zero set (isosurface) of the signed distance field is determined.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: October 27, 2020
    Assignee: Google LLC
    Inventors: Hugues Hoppe, Ricardo Martin Brualla, Harris Nover
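
A minimal sketch of the ray-marching idea in this abstract, reduced to a single analytic signed distance function: a unit sphere stands in for the surface implied by the depth views, and the step size, truncation threshold, and linear root interpolation are illustrative assumptions, not the patented multi-view aggregation.

```python
import math

# Hypothetical stand-in for the aggregated signed distance s: a unit sphere
# at the origin plays the role of the surface implied by the depth views.
def signed_distance(p):
    return math.sqrt(sum(c * c for c in p)) - 1.0

def trace_ray(origin, direction, trunc=0.5, step=0.01, max_t=10.0):
    """March along the ray; distances beyond the truncation threshold are
    treated as undefined, and the first zero crossing (isosurface) is
    located by linear interpolation between the bracketing samples."""
    prev_t, prev_s = None, None
    t = 0.0
    while t < max_t:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        s = signed_distance(p)
        if abs(s) <= trunc:  # defined signed-distance value
            if prev_s is not None and prev_s > 0.0 >= s:
                return prev_t + step * prev_s / (prev_s - s)
            prev_t, prev_s = t, s
        t += step
    return None  # ray missed the surface

# A ray from z = -3 toward the origin should hit the sphere at t = 2.
t_hit = trace_ray((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
```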
  • Publication number: 20200320777
    Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
    Type: Application
    Filed: April 1, 2020
    Publication date: October 8, 2020
    Inventors: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
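
The rerendering interface described above (image data buffer in, appearance-conditioned image out) can be illustrated with a toy stand-in; the actual invention uses a trained neural network, whereas the per-channel gain below is purely a hypothetical placeholder for the appearance code's effect.

```python
# Toy stand-in for the neural rerendering network: a per-channel gain driven
# by the "appearance code". This shows only the interface, not the real model.
def rerender(buffer, appearance_code):
    """buffer: list of (r, g, b) floats in [0, 1]; appearance_code: gains."""
    return [tuple(min(1.0, c * g) for c, g in zip(px, appearance_code))
            for px in buffer]

day = (1.0, 1.0, 1.0)    # identity appearance
dusk = (1.0, 0.6, 0.3)   # hypothetical warm, dim appearance condition

buffer = [(0.5, 0.5, 0.5), (0.2, 0.8, 0.4)]
dusk_image = rerender(buffer, dusk)
```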
  • Patent number: 10757410
    Abstract: Techniques of compressing color video images include computing a delta quantization parameter (ΔQP) for the color images based on a similarity between the depth image surface normal and the view direction associated with a color image. For example, upon receiving a frame having an image with multiple color and depth images, a computer finds a depth image that is closest in orientation to a color image. For each pixel of that depth image, the computer generates a blend weight based on the orientation of a normal at a position of the depth image and the viewpoints from which the plurality of color images were captured. The computer then generates a value of ΔQP based on the blend weight and determines a macroblock of the color image corresponding to the position, the macroblock being associated with the value of ΔQP for the pixel.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: August 25, 2020
    Assignee: Google LLC
    Inventors: Hugues Hoppe, True Price
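
The per-pixel blend weight and its mapping to ΔQP can be sketched as follows; the cosine weight follows the abstract's normal-versus-view-direction similarity, while the linear mapping and its range are assumptions for illustration (lower QP means finer quantization).

```python
def blend_weight(normal, view_dir):
    """Cosine of the angle between the (unit) surface normal and the (unit)
    view direction, clamped to [0, 1]: grazing surfaces get low weight."""
    return max(0.0, sum(n * v for n, v in zip(normal, view_dir)))

def delta_qp(weight, max_delta=10):
    """Hypothetical linear mapping: well-observed (high-weight) regions get a
    negative delta (finer quantization), poorly observed ones a positive one."""
    return round(max_delta * (1.0 - 2.0 * weight))
```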
  • Patent number: 9905035
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: February 27, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
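
The per-pixel optimization can be illustrated on a single pixel's time series. The patented method solves a joint optimization over all pixels; the brute-force search below scores only the temporal seam cost at the loop's wrap-around point, which is a deliberate simplification.

```python
def best_loop(values, min_period=2):
    """For one pixel's time series, brute-force the (start, period) pair that
    minimizes the wrap-around seam cost |V[start] - V[start + period]|."""
    best = None
    n = len(values)
    for period in range(min_period, n):
        for start in range(n - period):
            cost = abs(values[start] - values[start + period])
            if best is None or cost < best[0]:
                best = (cost, start, period)
    return best[1], best[2]  # per-pixel start time and loop period

# A pixel whose value nearly repeats with period 3 starting at t = 1:
start, period = best_loop([5, 1, 2, 3, 1.1, 9])
```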
  • Patent number: 9779774
    Abstract: A cinemagraph is generated that includes one or more video loops. A cinemagraph generator receives an input video, and semantically segments the frames to identify regions that correspond to semantic objects and the semantic object depicted in each identified region. Input time intervals are then computed for the pixels of the frames of the input video. An input time interval for a particular pixel includes a per-pixel loop period and a per-pixel start time of a loop at the particular pixel. In addition, the input time interval of a pixel is based, in part, on one or more semantic terms which keep pixels associated with the same semantic object in the same video loop. A cinemagraph is then created using the input time intervals computed for the pixels of the frames of the input video.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sing Bing Kang, Neel Suresh Joshi, Hugues Hoppe, Tae-Hyun Oh, Baoyuan Wang
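
The semantic term can be approximated crudely: after per-pixel loop parameters are computed, force every pixel of a semantic object to share one choice. The majority vote below is a hypothetical stand-in for the abstract's semantic terms, which instead penalize disagreement during the optimization itself.

```python
from collections import Counter

def enforce_semantic_consistency(per_pixel_loops, labels):
    """Make every pixel of a semantic object adopt that object's most
    common (start, period) choice -- a crude stand-in for the semantic term."""
    by_object = {}
    for loop, label in zip(per_pixel_loops, labels):
        by_object.setdefault(label, []).append(loop)
    majority = {label: Counter(loops).most_common(1)[0][0]
                for label, loops in by_object.items()}
    return [majority[label] for label in labels]

loops = [(0, 4), (0, 4), (2, 3), (5, 2), (5, 2)]
labels = ["water", "water", "water", "sky", "sky"]
consistent = enforce_semantic_consistency(loops, labels)
```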
  • Publication number: 20170154458
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Application
    Filed: December 8, 2016
    Publication date: June 1, 2017
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Patent number: 9547927
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Grant
    Filed: May 30, 2016
    Date of Patent: January 17, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Publication number: 20160275714
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Application
    Filed: May 30, 2016
    Publication date: September 22, 2016
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Patent number: 9378578
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Grant
    Filed: February 15, 2016
    Date of Patent: June 28, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Publication number: 20160163085
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Application
    Filed: February 15, 2016
    Publication date: June 9, 2016
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Patent number: 9292956
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Grant
    Filed: May 3, 2013
    Date of Patent: March 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Patent number: 8968091
    Abstract: Human body motion is represented by a skeletal model derived from image data of a user. The model represents joints and bones and has a rigid body portion. The sets of body data are scaled to a predetermined number of sets for a number of periodic units. A body-based coordinate 3-D reference system having a frame of reference defined with respect to a position within the rigid body portion of the skeletal model is generated. The body-based coordinate 3-D reference system is independent of the camera's field of view. The scaled data and the representation of relative motion within an orthogonal body-based 3-D reference system decrease the data volume and simplify the calculations for determining motion, thus enhancing real-time performance for multimedia applications controlled by a user's natural movements.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: March 3, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michalis Raptis, Chuck Noble, Joel Pritchett, Hugues Hoppe, Darko Kirovski
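
Expressing a joint in the body-based frame amounts to a translation to the body origin followed by projection onto the body's axes; the torso origin and axes below are made-up values for illustration.

```python
def to_body_frame(joint, origin, basis):
    """Express a camera-space joint position in the body-based frame:
    translate to the body origin, then project onto the body's orthonormal
    axes, making the result independent of the camera's field of view."""
    rel = tuple(j - o for j, o in zip(joint, origin))
    return tuple(sum(r * a for r, a in zip(rel, axis)) for axis in basis)

# Hypothetical torso origin and body axes (right, up, forward) in camera space:
origin = (1.0, 2.0, 3.0)
basis = ((0.0, 0.0, 1.0),   # body "right" happens to point along camera z
         (0.0, 1.0, 0.0),
         (-1.0, 0.0, 0.0))
hand = to_body_frame((1.0, 3.0, 4.0), origin, basis)
```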
  • Publication number: 20140327680
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Application
    Filed: May 3, 2013
    Publication date: November 6, 2014
    Applicant: Microsoft Corporation
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Patent number: 8872850
    Abstract: Various technologies described herein pertain to juxtaposing still and dynamic imagery to create a cliplet. A first subset of a spatiotemporal volume of pixels in an input video can be set as a static input segment, and the static input segment can be mapped to a background of the cliplet. Further, a second subset of the spatiotemporal volume of pixels in the input video can be set as a dynamic input segment based on a selection of a spatial region, a start time, and an end time within the input video. Moreover, the dynamic input segment can be refined spatially and/or temporally and mapped to an output segment of the cliplet within at least a portion of output frames of the cliplet based on a predefined temporal mapping function, and the output segment can be composited over the background for the output frames of the cliplet.
    Type: Grant
    Filed: March 5, 2012
    Date of Patent: October 28, 2014
    Assignee: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Sisil Sanjeev Mehta, Michael F. Cohen, Steven M. Drucker, Hugues Hoppe, Matthieu Uyttendaele
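
A bare-bones version of the compositing step: loop a dynamic region over a still background. Frames are flat lists of pixel values, the region is a set of indices, and the temporal mapping is a plain loop; the spatial/temporal refinement and the richer mapping functions of the invention are omitted.

```python
def make_cliplet(static_frame, input_frames, region, start, end, num_out):
    """Composite a looping dynamic segment over a still background.
    The temporal mapping simply repeats input frames [start, end)."""
    period = end - start
    out = []
    for t in range(num_out):
        src = input_frames[start + (t % period)]
        frame = list(static_frame)
        for idx in region:
            frame[idx] = src[idx]  # dynamic segment overrides the background
        out.append(frame)
    return out

# Background pixel 0 stays still; pixel 1 loops over input frames 1 and 2.
frames = make_cliplet([9, 9], [[0, 0], [1, 10], [2, 20], [3, 30]],
                      region={1}, start=1, end=3, num_out=4)
```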
  • Patent number: 8737769
    Abstract: A dense guide image or signal is used to inform the reconstruction of a target image from a sparse set of target points. The guide image and the set of target points are assumed to be derived from a same real world subject or scene. Potential discontinuities (e.g., tears, edges, gaps, etc.) are first detected in the guide image. The potential discontinuities may be borders of Voronoi regions, perhaps computed using a distance in data space (e.g., color space). The discontinuities and sparse set of points are used to reconstruct the target image. Specifically, pixels of the target image may be interpolated smoothly between neighboring target points, but where neighboring target points are separated by a discontinuity, the interpolation may jump abruptly (e.g., by adjusting or influencing relaxation) at the discontinuity. The target points may be used to select only a subset of the discontinuities to be used during reconstruction.
    Type: Grant
    Filed: November 26, 2010
    Date of Patent: May 27, 2014
    Assignee: Microsoft Corporation
    Inventors: Mark Finch, John Snyder, Hugues Hoppe, Yonatan Wexler
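
In one dimension the idea reduces to: every pixel takes its value from reachable sparse samples, where a path crossing a guide discontinuity is disallowed. The nearest-sample rule below replaces the patent's smooth interpolation, so treat it as a structural sketch only.

```python
def reconstruct(n, samples, discontinuities):
    """1-D sketch of guide-informed reconstruction. samples maps position ->
    value; discontinuities is a set of edges (i, i + 1) detected in the guide.
    Each pixel copies the nearest sample it can reach without crossing an edge."""
    out = []
    for i in range(n):
        best = None
        for pos, val in samples.items():
            lo, hi = min(i, pos), max(i, pos)
            if any((j, j + 1) in discontinuities for j in range(lo, hi)):
                continue  # this sample lies across a discontinuity
            if best is None or abs(i - pos) < best[0]:
                best = (abs(i - pos), val)
        out.append(best[1] if best else None)
    return out

# Two sparse samples separated by a guide discontinuity between pixels 3 and 4:
field = reconstruct(8, {0: 1.0, 7: 9.0}, {(3, 4)})
```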
  • Patent number: 8547389
    Abstract: Embodiments are described for a method to generate an image that includes image structure detail captured from a first image and color from a second image. The first image of a defined subject can be obtained from a computer memory. The first image may be a downsampled fine image with image detail. The second image captured of the defined subject in the first image can be obtained from a computer memory. The second image may be a coarse image. A target pixel in the second image can be selected. A target color distribution for a pixel window of the target pixel can then be computed. A source color distribution for a pixel window of a corresponding pixel in the first image can be computed using a computer processor. Further, a statistic of the target pixel can be determined with respect to the target color distribution. The source color in the source color distribution can be computed with the statistic. The target pixel color can then be replaced by the source color.
    Type: Grant
    Filed: April 5, 2010
    Date of Patent: October 1, 2013
    Assignee: Microsoft Corporation
    Inventors: Hugues Hoppe, Charles Han, Matt Uyttendaele
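
The statistic-matching step can be shown on grayscale windows: compute the target pixel's z-score within its local window, then evaluate that z-score against the source window's distribution. Using the z-score as "the statistic" is one plausible reading; the abstract does not commit to a particular statistic.

```python
import statistics

def transfer_color(target_window, source_window, target_pixel):
    """Find the target pixel's z-score within its local window, then map it
    into the source window's distribution to get the replacement color."""
    t_mean = statistics.mean(target_window)
    t_std = statistics.pstdev(target_window)
    s_mean = statistics.mean(source_window)
    s_std = statistics.pstdev(source_window)
    z = 0.0 if t_std == 0 else (target_pixel - t_mean) / t_std
    return s_mean + z * s_std

# The brightest pixel of its window maps to the bright end of the source window:
new_color = transfer_color([0, 2, 4], [10, 20, 30], target_pixel=4)
```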
  • Publication number: 20130229581
    Abstract: Various technologies described herein pertain to juxtaposing still and dynamic imagery to create a cliplet. A first subset of a spatiotemporal volume of pixels in an input video can be set as a static input segment, and the static input segment can be mapped to a background of the cliplet. Further, a second subset of the spatiotemporal volume of pixels in the input video can be set as a dynamic input segment based on a selection of a spatial region, a start time, and an end time within the input video. Moreover, the dynamic input segment can be refined spatially and/or temporally and mapped to an output segment of the cliplet within at least a portion of output frames of the cliplet based on a predefined temporal mapping function, and the output segment can be composited over the background for the output frames of the cliplet.
    Type: Application
    Filed: March 5, 2012
    Publication date: September 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Sisil Sanjeev Mehta, Michael F. Cohen, Steven M. Drucker, Hugues Hoppe, Matthieu Uyttendaele
  • Patent number: 8340415
    Abstract: Embodiments are described for a system and method for generating a multi-resolution image pyramid. The method can include obtaining an image captured as a coarse image of a defined subject and a fine image of the defined subject. The fine image can be downsampled to create a temporary image. A further operation is applying a structure transfer operation to the temporary image to transfer color detail from the coarse image. The structure transfer takes place while retaining structural detail from the temporary image. A blending operation can be applied between the temporary image and the fine image to construct an intermediate image for at least one intermediate level in the multi-resolution image pyramid between the fine image and the coarse image.
    Type: Grant
    Filed: April 5, 2010
    Date of Patent: December 25, 2012
    Assignee: Microsoft Corporation
    Inventors: Hugues Hoppe, Charles Han, Matt Uyttendaele
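
The pipeline in this abstract (downsample the fine image, transfer the coarse image's color while retaining the temporary image's structural detail, then blend) can be caricatured in one dimension. The mean shift below is a deliberately crude stand-in for the structure transfer operation.

```python
def downsample(img):
    """Average adjacent pairs (toy 1-D downsampling of the fine image)."""
    return [(img[i] + img[i + 1]) / 2 for i in range(0, len(img) - 1, 2)]

def transfer_structure(temp, coarse_mean):
    """Crude stand-in for structure transfer: keep the temporary image's
    structural detail (deviations from its mean) but adopt the coarse
    image's overall color (its mean)."""
    t_mean = sum(temp) / len(temp)
    return [coarse_mean + (v - t_mean) for v in temp]

fine = [10.0, 12.0, 20.0, 22.0]   # detailed image with its own tint
coarse_mean = 50.0                # overall color of the coarse image
temp = downsample(fine)           # temporary image
level = transfer_structure(temp, coarse_mean)
```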
  • Publication number: 20120165098
    Abstract: Human body motion is represented by a skeletal model derived from image data of a user. The model represents joints and bones and has a rigid body portion. The sets of body data are scaled to a predetermined number of sets for a number of periodic units. A body-based coordinate 3-D reference system having a frame of reference defined with respect to a position within the rigid body portion of the skeletal model is generated. The body-based coordinate 3-D reference system is independent of the camera's field of view. The scaled data and the representation of relative motion within an orthogonal body-based 3-D reference system decrease the data volume and simplify the calculations for determining motion, thus enhancing real-time performance for multimedia applications controlled by a user's natural movements.
    Type: Application
    Filed: March 2, 2012
    Publication date: June 28, 2012
    Applicant: Microsoft Corporation
    Inventors: Michalis Raptis, Chuck Noble, Joel Pritchett, Hugues Hoppe, Darko Kirovski
  • Publication number: 20120134597
    Abstract: A dense guide image or signal is used to inform the reconstruction of a target image from a sparse set of target points. The guide image and the set of target points are assumed to be derived from a same real world subject or scene. Potential discontinuities (e.g., tears, edges, gaps, etc.) are first detected in the guide image. The potential discontinuities may be borders of Voronoi regions, perhaps computed using a distance in data space (e.g., color space). The discontinuities and sparse set of points are used to reconstruct the target image. Specifically, pixels of the target image may be interpolated smoothly between neighboring target points, but where neighboring target points are separated by a discontinuity, the interpolation may jump abruptly (e.g., by adjusting or influencing relaxation) at the discontinuity. The target points may be used to select only a subset of the discontinuities to be used during reconstruction.
    Type: Application
    Filed: November 26, 2010
    Publication date: May 31, 2012
    Applicant: Microsoft Corporation
    Inventors: Mark Finch, John Snyder, Hugues Hoppe, Yonatan Wexler