Patents by Inventor Julien Fleureau

Julien Fleureau has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11122101
    Abstract: The present disclosure relates to methods, devices or streams for encoding, transmitting and decoding two-dimensional point clouds. When point clouds are encoded as frames, a large number of pixels are left unused. A dense mapping operator optimizes the use of pixels but requires a lot of data to be encoded in the stream, and its inverse operator is difficult to compute. A simplified mapping operator is generated from a dense mapping operator and is stored as matrices of two-dimensional coordinates representative of an unfolded grid, which requires little space in the stream. The inverse operator is easy to generate from the unfolded grid.
    Type: Grant
    Filed: May 2, 2018
    Date of Patent: September 14, 2021
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Julien Fleureau, Bertrand Chupeau, Renaud Dore
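A minimal NumPy sketch of the general idea behind 11122101's simplified mapping operator: the operator is stored as a grid of 2D source coordinates, and its inverse is a simple scatter through the same grid. This is illustrative only, not code from the patent; the array shapes and the identity-grid check are assumptions.

```python
import numpy as np

def apply_grid_mapping(frame, grid):
    """Remap `frame` through a grid of 2D source coordinates.

    grid has shape (H, W, 2); grid[i, j] holds the (row, col) in `frame`
    whose value ends up at position (i, j) of the mapped output.
    """
    return frame[grid[..., 0], grid[..., 1]]

def invert_grid_mapping(mapped, grid, out_shape):
    """Inverse operator: scatter mapped pixels back to their source positions."""
    out = np.zeros(out_shape, dtype=mapped.dtype)
    out[grid[..., 0], grid[..., 1]] = mapped
    return out

# Toy check with a 4x4 frame and an identity "unfolded grid".
H, W = 4, 4
frame = np.arange(H * W).reshape(H, W)
rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
grid = np.stack([rows, cols], axis=-1)
roundtrip = invert_grid_mapping(apply_grid_mapping(frame, grid), grid, (H, W))
assert np.array_equal(roundtrip, frame)
```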
  • Patent number: 11122294
    Abstract: A sequence of point clouds is encoded as a video by an encoder and transmitted to a decoder which retrieves the sequence of point clouds. Visible points of a point cloud are iteratively projected onto projection maps according to at least two centers of projection to determine patch data item lists. One of the centers of projection is selected, and the corresponding image patches are generated and packed into a picture. Pictures and the associated patch data item lists are encoded in a stream. The decoding method decodes the pictures and the associated patch data item lists. Pixels of the image patches comprised in the pictures are un-projected according to data stored in the associated patches. The methods have the advantage of encoding every point of the point clouds in a manner that avoids artifacts and allows decoding at video frame rate.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: September 14, 2021
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Julien Fleureau, Thierry Tapie, Franck Thudor
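The decoding side of 11122294 un-projects pixels of image patches according to data stored in the associated patch data items. Below is a rough sketch of such an un-projection under an assumed pinhole model; the parameter names (patch origin, focal length, center of projection) and their layout are assumptions, not the patent's actual syntax.

```python
import numpy as np

def unproject_patch(depth_patch, patch_origin, focal, center):
    """Un-project a depth image patch back to 3D points.

    depth_patch : (h, w) array of depth values
    patch_origin: (u0, v0) position of the patch inside the full picture
    focal       : focal length in pixels (pinhole model assumed)
    center      : (cx, cy, cz) center of projection from the patch data item
    """
    h, w = depth_patch.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    u = u + patch_origin[0]
    v = v + patch_origin[1]
    z = depth_patch
    x = (u / focal) * z + center[0]
    y = (v / focal) * z + center[1]
    return np.stack([x, y, z + center[2]], axis=-1).reshape(-1, 3)

# Example: un-project a flat 8x8 patch placed at (16, 32) in the picture.
points = unproject_patch(np.full((8, 8), 2.0), (16, 32), focal=256.0, center=(0.0, 0.0, 0.0))
```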
  • Publication number: 20210274147
    Abstract: A method and a device provide for transmitting information representing a viewpoint in a 3D scene represented with a set of volumetric video contents, and receiving a first volumetric video content of the set, the first volumetric video content corresponding to a range of points of view comprising the viewpoint. The first volumetric video content is represented with a set of first patches, each of which corresponds to a 2D parametrization of a first group of points in a 3D part of the 3D scene associated with the first volumetric video content, and at least one first patch refers to an area of a second patch corresponding to a 2D parametrization of a second group of points in another 3D part of the 3D scene associated with a second volumetric video content of the set.
    Type: Application
    Filed: June 21, 2019
    Publication date: September 2, 2021
    Inventors: Julien FLEUREAU, Renaud DORE, Charles SALMON-LEGAGNEUR, Remi HOUDAILLE
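Publication 20210274147 has first patches that may refer to an area of a second patch belonging to another volumetric video content. A hypothetical data-structure sketch of that reference follows; all field names are made up for illustration.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PatchRef:
    """Reference from a first patch to an area of a second patch."""
    content_id: int                  # which volumetric video content holds the second patch
    patch_id: int                    # index of the second patch in that content
    area: Tuple[int, int, int, int]  # (x, y, width, height) inside the second patch

@dataclass
class FirstPatch:
    params_2d: dict                  # parameters of the 2D parametrization of the point group
    ref: Optional[PatchRef] = None   # set when the patch reuses data from another content

# A first patch that borrows a 64x64 area from patch 3 of content 7.
patch = FirstPatch(params_2d={"width": 128, "height": 128}, ref=PatchRef(7, 3, (0, 0, 64, 64)))
```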
  • Patent number: 11095920
    Abstract: A colored 3D scene is encoded as one or two patch atlas images. Points of the 3D scene that belong to a part of space defined by a truncated sphere centered on a point of view, and that are visible from this point of view, are iteratively projected onto projection maps. At each iteration, the projected part is removed from the 3D scene and the truncated sphere defining the next part of the scene to be projected is rotated. Once the entirety of the 3D scene has been projected onto a set of projection maps, pictures are determined within these maps. A picture, also called a patch, is a cluster of depth-consistent connected pixels. Patches are packed into a depth atlas and a color atlas associated with data comprising information about the rotation of the truncated sphere, so that a decoder can retrieve the projection mapping and perform the inverse projection.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: August 17, 2021
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Julien Fleureau, Bertrand Chupeau, Franck Thudor
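A toy sketch of the iterative "peel and rotate" loop described in 11095920: select the points inside a crudely approximated truncated sphere around the viewpoint, set them aside for projection, remove them from the scene, then rotate the sphere. The cone-based selection, the rotation step and all numeric values are assumptions for illustration.

```python
import numpy as np

def in_truncated_sphere(points, viewpoint, direction, radius, min_cos):
    """Select points within `radius` of the viewpoint whose direction lies inside
    a cone around `direction` (a crude stand-in for the truncated sphere)."""
    vec = points - viewpoint
    dist = np.linalg.norm(vec, axis=1)
    cos_angle = (vec @ direction) / np.maximum(dist, 1e-9)
    return (dist < radius) & (cos_angle > min_cos)

def rotate_z(direction, angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]) @ direction

# Iteratively peel the scene: select a part, set it aside, remove it, rotate.
points = np.random.rand(1000, 3) * 2.0 - 1.0
viewpoint = np.zeros(3)
direction = np.array([1.0, 0.0, 0.0])
parts = []
for step in range(8):
    mask = in_truncated_sphere(points, viewpoint, direction, radius=1.5, min_cos=0.5)
    parts.append(points[mask])       # in the patent these points would be projected to a map
    points = points[~mask]           # the projected part is removed from the scene
    direction = rotate_z(direction, 2.0 * np.pi / 8)  # rotation recorded for the decoder
```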
  • Patent number: 11064213
    Abstract: The present disclosure relates to a method for embedding key information in an image, the method comprising reserving a range of DMZ values, within a predetermined range of 2^N values used for storing useful data in the image, the reserved range being used for storing key information associated with at least one coordinate in the image, with N > 0 and DMZ << 2^N.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: July 13, 2021
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Julien Fleureau, Renaud Dore, Thierry Tapie
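Patent 11064213 reserves a small "DMZ" range of values, out of the 2^N values otherwise used for useful data, to carry key information. A minimal sketch of how such a reserved range could be used, assuming N-bit samples and made-up constants; not the patent's actual scheme.

```python
N = 10                        # samples stored on N bits (assumed)
DMZ = 8                       # size of the reserved range, with DMZ << 2**N
MAX_USEFUL = 2**N - DMZ       # useful data stays below the reserved range

def embed(value, is_key=False, key=0):
    """Store either useful data or a key code in a single N-bit sample."""
    if is_key:
        assert 0 <= key < DMZ
        return MAX_USEFUL + key              # values in [2**N - DMZ, 2**N) carry key info
    return min(value, MAX_USEFUL - 1)        # useful data kept out of the reserved range

def extract(sample):
    """Return ('key', code) for samples in the reserved range, ('data', value) otherwise."""
    if sample >= MAX_USEFUL:
        return ("key", sample - MAX_USEFUL)
    return ("data", sample)

assert extract(embed(500)) == ("data", 500)
assert extract(embed(0, is_key=True, key=3)) == ("key", 3)
```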
  • Publication number: 20210176496
    Abstract: Encoding/decoding data representative of a 3D representation according to a range of points of view can involve generating a depth map associated with a part of the 3D representation according to a parameter representative of at least a 2D parametrization associated with the part and data associated with a point of the part, and generating according to the parameter and the data a texture map associated with the part, where information representative of a variation of a quantization parameter within the depth map and/or the texture map can be obtained according to a region of interest of the 3D representation.
    Type: Application
    Filed: October 23, 2018
    Publication date: June 10, 2021
    Inventors: Bertrand CHUPEAU, Franck GALPIN, Julien FLEUREAU
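Publication 20210176496 varies the quantization parameter within the depth and/or texture maps according to a region of interest. Below is a sketch of a per-block QP map driven by an ROI mask; the block size, base QP and delta are illustrative assumptions, not values from the application.

```python
import numpy as np

def qp_map_from_roi(roi_mask, base_qp=32, roi_delta=-8, block=16):
    """Build a per-block quantization-parameter map from a region-of-interest mask.

    Blocks that overlap the ROI get a lower (finer) QP; all numbers are illustrative.
    """
    h, w = roi_mask.shape
    bh, bw = h // block, w // block
    qp = np.full((bh, bw), base_qp, dtype=int)
    for i in range(bh):
        for j in range(bw):
            if roi_mask[i * block:(i + 1) * block, j * block:(j + 1) * block].any():
                qp[i, j] = base_qp + roi_delta
    return qp

# Example: an ROI in the top-left corner of a 128x128 depth or texture map.
mask = np.zeros((128, 128), dtype=bool)
mask[:32, :32] = True
print(qp_map_from_roi(mask))
```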
  • Patent number: 11025955
    Abstract: A sequence of point clouds is encoded as a video by an encoder and transmitted to a decoder which retrieves the sequence of point clouds. Visible points of a point cloud are iteratively projected on a projection map to determine a patch data item list. Image patches are generated and packed into a picture. Pictures and the associated patch data item lists are encoded in a stream. The decoding method decodes the pictures and the associated patch data item lists. Pixels of the image patches comprised in the pictures are un-projected according to data stored in the associated patches. The methods have the advantage of encoding every point of the point clouds in a manner that avoids artifacts and allows decoding at video frame rate.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: June 1, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Julien Fleureau, Renaud Dore, Franck Thudor
  • Publication number: 20210112236
    Abstract: Methods and device for encoding/decoding data representative of a 3D scene. First data representative of the texture of the 3D scene visible from a first viewpoint is encoded into first tracks. The first data is arranged in first tiles of a first frame. Second data representative of the depth associated with points of the 3D scene is encoded into second tracks. The second data is arranged in second tiles of a second frame, the total number of second tiles being greater than the total number of first tiles. Instructions to extract at least a part of the first and second data from at least a part of the first and second tracks are further encoded into one or more third tracks.
    Type: Application
    Filed: March 27, 2019
    Publication date: April 15, 2021
    Inventors: Julien FLEUREAU, Bertrand CHUPEAU, Thierry TAPIE, Franck THUDOR
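Publication 20210112236 encodes texture tiles, depth tiles and extraction instructions into first, second and third tracks. The following is a hypothetical sketch of those relationships as plain data structures; the field names and the viewport-driven selection are assumptions, not the application's container syntax.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TileTrack:
    track_id: int
    tile_index: int              # position of the tile in the frame grid
    kind: str                    # "texture" (first tracks) or "depth" (second tracks)

@dataclass
class ExtractorTrack:
    """Third track: instructions telling a player which tile tracks to pull from."""
    referenced_track_ids: List[int] = field(default_factory=list)

def build_extractor(texture_tiles, depth_tiles, wanted_tile_indices):
    refs = [t.track_id for t in texture_tiles + depth_tiles
            if t.tile_index in wanted_tile_indices]
    return ExtractorTrack(referenced_track_ids=refs)

# More depth tiles than texture tiles, as in the abstract.
texture = [TileTrack(i, i, "texture") for i in range(4)]
depth = [TileTrack(10 + i, i, "depth") for i in range(8)]
extractor = build_extractor(texture, depth, wanted_tile_indices={0, 1})
```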
  • Patent number: 10958950
    Abstract: The present disclosure relates to methods, apparatus or systems for formatting backward compatible immersive video streams. At least one legacy rectangular video is captured from an immersive video obtained from a source (82). A set of camera control data is used to determine which parts of the immersive video will constitute the legacy videos (84). These parts are removed from the immersive video (83), and all prepared videos are packaged in a stream (85). The structure of the stream is a container. Information about the location and size of the removed parts may be added to the stream.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: March 23, 2021
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Renaud Dore, Julien Fleureau, Thierry Tapie
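A small sketch of the operation described in 10958950: a rectangular legacy video is cut out of the immersive frame, the removed area is not carried twice, and its location and size are recorded so they can be signalled in the stream container. The equirectangular layout and the zero-fill are assumptions for illustration.

```python
import numpy as np

def extract_legacy_view(immersive_frame, x, y, w, h):
    """Cut a rectangular legacy video area out of an immersive frame.

    Returns the legacy picture, the immersive frame with that area blanked out,
    and the location/size information to be signalled in the stream container.
    """
    legacy = immersive_frame[y:y + h, x:x + w].copy()
    remaining = immersive_frame.copy()
    remaining[y:y + h, x:x + w] = 0          # removed part is not encoded twice
    info = {"x": x, "y": y, "w": w, "h": h}  # location and size of the removed part
    return legacy, remaining, info

frame = np.random.randint(0, 256, (1024, 2048, 3), dtype=np.uint8)
legacy, remaining, info = extract_legacy_view(frame, x=512, y=256, w=640, h=360)
```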
  • Publication number: 20210082197
    Abstract: A method and an apparatus configured to obtain a captured image of a real environment. The real environment includes a device having a screen, and the captured image includes that device and its screen. The pose of the screen is determined based on the captured image. From a source other than the captured image, 2D content to be displayed on a representation of the screen in a virtual scene is obtained. The 2D content is projected to produce projected 2D content, the projected 2D content being aligned to the pose of the screen. The virtual scene is generated as a combination of a virtual content item and the projected 2D content.
    Type: Application
    Filed: October 22, 2020
    Publication date: March 18, 2021
    Inventors: Sylvain THIEBAUD, Julien FLEUREAU, Francois GERARD
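Publication 20210082197 projects 2D content so that it aligns with the pose of a screen detected in a captured image. Below is a sketch of one way to do that with a planar homography, using OpenCV purely as an example tool; the corner ordering and the use of getPerspectiveTransform are assumptions, not the application's method.

```python
import numpy as np
import cv2  # OpenCV used only as an example tool

def project_screen_content(content, screen_corners_px, out_size):
    """Warp 2D content onto the screen quadrilateral found in the captured image.

    screen_corners_px: 4x2 array of screen corners in the captured image, assumed
    ordered top-left, top-right, bottom-right, bottom-left.
    out_size: (width, height) of the output image.
    """
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    homography = cv2.getPerspectiveTransform(src, np.float32(screen_corners_px))
    return cv2.warpPerspective(content, homography, out_size)

content = np.full((480, 640, 3), 200, dtype=np.uint8)
corners = [[100, 120], [500, 140], [490, 400], [110, 380]]
warped = project_screen_content(content, corners, out_size=(1280, 720))
```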
  • Publication number: 20210074025
    Abstract: Methods and devices are provided to encode and decode a data stream carrying data representative of a three-dimensional scene, the data stream comprising color pictures packed in a color image; depth pictures packed in a depth image; and a set of patch data items comprising de-projection data, data for retrieving a color picture in the color image, and geometry data. These data are inserted in the video track of the stream with associated temporal information. Color and depth pictures that are repeated at least twice in a sequence of patches are not packed in the images but are inserted in the image track of the stream and are pointed to by the color or geometry data.
    Type: Application
    Filed: January 14, 2019
    Publication date: March 11, 2021
    Inventors: Julien FLEUREAU, Bertrand CHUPEAU, Renaud DORE, Mary-Luc CHAMPEL
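Publication 20210074025 stores color or depth pictures that repeat across a sequence of patches only once, in the image track, and points to them from the patch data. A rough sketch of that deduplication using a content hash; the hashing and the dictionary layout are illustrative assumptions.

```python
import hashlib
import numpy as np

def pack_with_dedup(patch_pictures):
    """Store each distinct patch picture once and let repeated ones point at it.

    patch_pictures: list of NumPy arrays (color or depth pictures of patches).
    Returns the stored pictures keyed by content hash and, per patch, the key
    its data item would point to.
    """
    stored, refs = {}, []
    for pic in patch_pictures:
        key = hashlib.sha1(pic.tobytes()).hexdigest()
        if key not in stored:
            stored[key] = pic
        refs.append(key)
    return stored, refs

a = np.zeros((4, 4), dtype=np.uint8)
b = np.ones((4, 4), dtype=np.uint8)
stored, refs = pack_with_dedup([a, b, a, a])   # 'a' is stored once, referenced three times
```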
  • Publication number: 20210074029
    Abstract: Methods and devices are provided to encode and decode a data stream carrying data representative of a three-dimensional scene, the data stream comprising color pictures packed in a color image; depth pictures packed in a depth image; and a set of patch data items comprising de-projection data, data for retrieving a color picture in the color image, and geometry data. Two types of geometry data are possible. The first type describes how to retrieve a depth picture in the depth image. The second type comprises an identifier of a parametric function and a list of parameter values for the identified parametric function.
    Type: Application
    Filed: January 14, 2019
    Publication date: March 11, 2021
    Inventors: Julien FLEUREAU, Renaud DORE, Franck THUDOR
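The second type of geometry data in 20210074029 is an identifier of a parametric function plus its parameter values. A toy sketch of resolving depth from such data; the function identifiers and the plane models are made up for illustration.

```python
# Type-2 geometry data: a parametric-function identifier plus its parameter values.
# The identifiers and the plane models below are made up for illustration.
PARAMETRIC_FUNCTIONS = {
    0: lambda u, v, p: p[0],                        # constant-depth plane
    1: lambda u, v, p: p[0] + p[1] * u + p[2] * v,  # tilted plane
}

def depth_from_geometry(geometry, u, v):
    """Resolve the depth at patch coordinates (u, v) from parametric geometry data."""
    func = PARAMETRIC_FUNCTIONS[geometry["func_id"]]
    return func(u, v, geometry["params"])

depth = depth_from_geometry({"func_id": 1, "params": [2.0, 0.1, -0.05]}, u=4, v=7)
```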
  • Publication number: 20210012515
    Abstract: The present invention generally relates to an apparatus and a method for obtaining a registration error map representing a level of sharpness of an image. Many methods are known that allow determining the position of a camera with respect to an object, based on knowledge of a 3D model of the object and the intrinsic parameters of the camera. However, regardless of the visual servoing technique used, there is no control in the image space, and the object may leave the camera's field of view during servoing. It is proposed to obtain a registration error map relating to an image of the object of interest, generated by computing the intersection of a re-focusing surface obtained from a 3D model of said object of interest and a focal stack based on acquired four-dimensional light-field data relating to said object of interest.
    Type: Application
    Filed: September 22, 2020
    Publication date: January 14, 2021
    Inventors: Julien Fleureau, Pierre Hellier, Benoit Vandame
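A loose sketch of the idea in 20210012515: compare, per pixel, the focal-stack slice that is locally sharpest with the slice predicted by the re-focusing surface derived from the object's 3D model. The gradient-based sharpness proxy is an assumption, not the application's actual measure.

```python
import numpy as np

def registration_error_map(focal_stack, expected_focus_idx):
    """Per-pixel gap between the locally sharpest focal-stack slice and the slice
    predicted by the re-focusing surface from the object's 3D model.

    focal_stack        : (S, H, W) stack of refocused images
    expected_focus_idx : (H, W) slice index predicted by the 3D model
    """
    gy, gx = np.gradient(focal_stack, axis=(1, 2))   # crude per-slice sharpness proxy
    sharpness = gx ** 2 + gy ** 2
    best_idx = np.argmax(sharpness, axis=0)
    return np.abs(best_idx - expected_focus_idx)

stack = np.random.rand(12, 64, 64)
error = registration_error_map(stack, expected_focus_idx=np.full((64, 64), 6))
```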
  • Patent number: 10891784
    Abstract: Method and device for generating a stream of data representative of a 3D point cloud. The 3D point cloud is partitioned into a plurality of 3D elementary parts. A set of two-dimensional (2D) parametrizations is determined, each 2D parametrization representing one 3D part of the point cloud with a set of parameters. Each 3D part is represented as a 2D pixel image. A depth map and a color map are determined as a first patch atlas and a second patch atlas. A data stream is generated by combining and/or coding the parameters of the 2D parametrizations, the first patch atlas, the second patch atlas, and mapping information that links each 2D parametrization with its associated depth map and color map in the first and second patch atlases, respectively.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: January 12, 2021
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Renaud Dore, Franck Galpin, Gerard Briand, Julien Fleureau
  • Patent number: 10885658
    Abstract: The present disclosure relates to methods, apparatus or systems for determining a final pose (21) of a rendering device. An initial pose is associated with the rendering device. A module (25) determines an intermediate pose (26) according to data from absolute pose sensors (23) and/or differential pose sensors (22). A module (27) determines the final pose (21) according to, first, a difference between the intermediate pose (26) and the initial pose information, second, the data from the differential pose sensors (22), and third, an evaluation of the visual perception of movements for the current images (24) displayed on the rendering device.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: January 5, 2021
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Julien Fleureau, Franck Galpin, Xavier Burgos
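Patent 10885658 determines the final pose from the difference between the intermediate and initial poses, the differential sensor data, and an evaluation of how perceptible the correction would be. A highly simplified sketch follows, treating poses as 3-vectors and the perceptual evaluation as a single weight; both simplifications are assumptions, not the patent's method.

```python
import numpy as np

def fuse_pose(initial_pose, intermediate_pose, differential_delta, perceptual_weight):
    """Blend toward the sensor-derived intermediate pose only as strongly as the
    viewer is likely to perceive, otherwise follow the differential update.

    Poses are plain 3-vectors here (e.g. Euler angles); perceptual_weight in [0, 1]
    would come from analysing motion in the currently displayed images.
    """
    drift = intermediate_pose - initial_pose
    return initial_pose + differential_delta + perceptual_weight * drift

final = fuse_pose(np.zeros(3), np.array([0.05, 0.0, 0.0]), np.array([0.01, 0.0, 0.0]), 0.3)
```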
  • Publication number: 20200380765
    Abstract: Encoding/decoding data representative of a 3D representation of a scene according to a range of points of view can involve generating a depth map associated with a part of the 3D representation according to a parameter representative of a two-dimensional parameterization associated with the part and data associated with a point included in the part, wherein the two-dimensional parameterization can be responsive to geometric information associated with the point and to pose information associated with the range of points of view. A texture map associated with the part can be generated according to the parameter and data associated with the point. First information representative of point density of points in a part of the part can be obtained. The depth map, texture map, parameter, and first information can be included in respective syntax elements of a bitstream.
    Type: Application
    Filed: November 6, 2018
    Publication date: December 3, 2020
    Inventors: Franck THUDOR, Bertrand CHUPEAU, Renaud DORE, Thierry TAPIE, Julien FLEUREAU
  • Publication number: 20200374559
    Abstract: A colored 3D scene is encoded as one or two patch atlas images. Points of the 3D scene that belong to a part of space defined by a truncated sphere centered on a point of view, and that are visible from this point of view, are iteratively projected onto projection maps. At each iteration, the projected part is removed from the 3D scene and the truncated sphere defining the next part of the scene to be projected is rotated. Once the entirety of the 3D scene has been projected onto a set of projection maps, pictures are determined within these maps. A picture, also called a patch, is a cluster of depth-consistent connected pixels. Patches are packed into a depth atlas and a color atlas associated with data comprising information about the rotation of the truncated sphere, so that a decoder can retrieve the projection mapping and perform the inverse projection.
    Type: Application
    Filed: November 29, 2018
    Publication date: November 26, 2020
    Inventors: Julien FLEUREAU, Bertrand CHUPEAU, Franck THUDOR
  • Patent number: 10846932
    Abstract: A method and device for compositing and/or transmitting a first image to a first display device, the method comprising: receiving a second image representative of a scene, the scene comprising a second display device displaying a third image; receiving the third image; obtaining first information representative of the pose of the second display device with respect to the scene; distorting the third image according to the first information; generating the first image by combining the second image and the distorted third image using the obtained first information; and transmitting data representative of the first image.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: November 24, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Sylvain Thiebaud, Julien Fleureau, Francois Gerard
  • Publication number: 20200344493
    Abstract: Methods and devices are provided to encode and decode a data stream carrying data representative of a three-dimensional scene, the data stream comprising color pictures packed in a color image; depth pictures packed in a depth image; and a set of patch data items comprising de-projection data, data for retrieving a color picture in the color image, and geometry data. Two types of geometry data are possible. The first type describes how to retrieve a depth picture in the depth image. The second type comprises an identifier of a 3D mesh; the vertex coordinates and faces of this mesh are used to retrieve the location of points in the de-projected scene.
    Type: Application
    Filed: January 4, 2019
    Publication date: October 29, 2020
    Inventors: Julien FLEUREAU, Renaud DORE, Thierry TAPIE
  • Patent number: 10818020
    Abstract: The present invention generally relates to an apparatus and a method for obtaining a registration error map representing a level of sharpness of an image. Many methods are known that allow determining the position of a camera with respect to an object, based on knowledge of a 3D model of the object and the intrinsic parameters of the camera. However, regardless of the visual servoing technique used, there is no control in the image space, and the object may leave the camera's field of view during servoing. It is proposed to obtain a registration error map relating to an image of the object of interest, generated by computing the intersection of a re-focusing surface obtained from a 3D model of said object of interest and a focal stack based on acquired four-dimensional light-field data relating to said object of interest.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: October 27, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Julien Fleureau, Pierre Hellier, Benoit Vandame