Patents by Inventor Franck Thudor

Franck Thudor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210400334
    Abstract: Methods and devices are provided to play back a video content item comprising loop sequences. The player manages two or three playing modes and switches between a first-in first-out video buffer and a circular video buffer whose size is adapted to the loop sequence for which the player is in loop mode. A start point and an end point are obtained for each loop sequence of the video. In an embodiment, a code is associated with frames of the video to indicate to which loop(s) a frame belongs.
    Type: Application
    Filed: November 22, 2019
    Publication date: December 23, 2021
    Inventors: Franck Thudor, Bertrand Chupeau, Renaud Dore
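    The abstract above describes a player that changes buffering strategy when it enters a loop. The following is a minimal Python sketch of that idea only, under assumptions of my own (the `LoopingPlayer` class, its loop list, and the per-frame lookup are illustrative, not the claimed implementation): frames outside any loop pass through a first-in first-out buffer, and frames inside a loop go into a circular buffer sized to that loop.
    ```python
    from collections import deque

    class LoopingPlayer:
        """Toy player: FIFO buffering outside loops, ring buffering inside them."""

        def __init__(self, loops):
            self.loops = loops            # list of (start_frame, end_frame) pairs
            self.fifo = deque()           # normal playback buffer
            self.ring = None              # circular buffer, created on demand

        def loop_of(self, frame_index):
            """Return the loop a frame belongs to, if any (the per-frame 'code' idea)."""
            for start, end in self.loops:
                if start <= frame_index <= end:
                    return start, end
            return None

        def push(self, frame_index, frame):
            loop = self.loop_of(frame_index)
            if loop is None:
                self.ring = None
                self.fifo.append(frame)                    # FIFO mode
            else:
                size = loop[1] - loop[0] + 1               # loop length in frames
                if self.ring is None or self.ring.maxlen != size:
                    self.ring = deque(maxlen=size)         # circular buffer sized to the loop
                self.ring.append(frame)                    # loop mode

    # player = LoopingPlayer(loops=[(10, 19)])
    # for i in range(30):
    #     player.push(i, frame=f"frame-{i}")
    ```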
  • Publication number: 20210400305
    Abstract: A sequence of point clouds is encoded as a video by an encoder and transmitted to a decoder, which retrieves the sequence of point clouds. Visible points of a point cloud are iteratively projected onto projection maps according to at least two centers of projection to determine patch data item lists. One of the centers of projection is selected, and the corresponding image patches are generated and packed into a picture. Pictures and the associated patch data item lists are encoded in a stream. The decoding method decodes the pictures and associated patch data item lists. Pixels of the image patches comprised in the pictures are un-projected according to data stored in the associated patches. The methods have the advantage of encoding every point of the point clouds in a manner that avoids artifacts and allows decoding at video frame rate.
    Type: Application
    Filed: September 1, 2021
    Publication date: December 23, 2021
    Inventors: Julien FLEUREAU, Thierry TAPIE, Franck THUDOR
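    The abstract above (and its granted and republished counterparts further down this list) hinges on trying several centers of projection and keeping the one that captures the scene best. The sketch below is a deliberately crude stand-in, with invented names (`project`, `select_center`) and an orthographic map in place of the real projection and visibility test; it only illustrates the selection step, not patch clustering or packing.
    ```python
    import numpy as np

    def project(points, center, resolution=16):
        """Toy orthographic projection map: keep the closest depth per pixel."""
        shifted = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
        u = np.clip((shifted[:, 0] * 2).astype(int) + resolution // 2, 0, resolution - 1)
        v = np.clip((shifted[:, 1] * 2).astype(int) + resolution // 2, 0, resolution - 1)
        depth = np.full((resolution, resolution), np.inf)
        for x, y, z in zip(u, v, shifted[:, 2]):
            depth[y, x] = min(depth[y, x], z)
        return depth

    def select_center(points, centers):
        """Pick the center of projection whose map covers the most points."""
        maps = {tuple(c): project(points, c) for c in centers}
        best = max(maps, key=lambda c: np.isfinite(maps[c]).sum())
        return best, maps[best]

    # points = np.random.rand(500, 3)
    # center, depth_map = select_center(points, centers=[(0, 0, -1), (0, 0, 2)])
    ```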
  • Publication number: 20210385454
    Abstract: Methods and devices for encoding/decoding data representative of the depth of a 3D scene. The depth data are quantized in a range of quantized depth values larger than the range of encoding values allowed by a determined encoding bit depth. For blocks of pixels comprising the depth data, a first set of candidate quantization parameters is determined. A second set of quantization parameters is determined as a subset of the union of the first sets; it comprises candidate quantization parameters common to a plurality of blocks, and one or more quantization parameters of the second set are associated with each block of pixels of the picture. The second set of quantization parameters is encoded, and the quantized depth values are encoded according to the quantization parameters.
    Type: Application
    Filed: April 13, 2018
    Publication date: December 9, 2021
    Inventors: Julien Fleureau, Renaud DORE, Franck THUDOR, Thierry TAPIE
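    As a rough illustration of the parameter-sharing idea in the abstract above: each block proposes candidate quantization parameters covering its own depth range (the "first sets"), and only candidates proposed by more than one block are kept (the "second set"). Everything here, from the `(offset, scale)` parameter form to the function names, is an assumption made for the sketch, not the claimed method.
    ```python
    from collections import Counter

    def candidates_for_block(block_depths, bit_depth=10):
        """First set: a few (offset, scale) pairs that fit this block's depth range."""
        lo, hi = min(block_depths), max(block_depths)
        max_code = (1 << bit_depth) - 1
        scale = max(1, (hi - lo) // max_code + 1)
        # offer several offsets so neighbouring blocks can end up with common candidates
        return {(lo - d, scale) for d in (0, 1, 2)}

    def shared_parameters(blocks, bit_depth=10):
        """Second set: candidates common to a plurality of blocks."""
        counts = Counter(p for block in blocks
                         for p in candidates_for_block(block, bit_depth))
        return {p for p, n in counts.items() if n > 1}

    # shared_parameters([[1000, 1500], [999, 1400], [5000, 9000]])
    ```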
  • Patent number: 11178383
    Abstract: Methods and devices for encoding/decoding data representative of a 3D scene. First data representative of the texture of the 3D scene visible from a first viewpoint is encoded into first tracks. The first data is arranged in first tiles of a first frame. Second data representative of the depth associated with points of the 3D scene is encoded into second tracks. The second data is arranged in second tiles of a second frame, the total number of second tiles being greater than the total number of first tiles. Instructions to extract at least a part of the first and second data from at least a part of the first and second tracks are further encoded into one or more third tracks.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: November 16, 2021
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Julien Fleureau, Bertrand Chupeau, Thierry Tapie, Franck Thudor
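    The layout in the abstract above maps naturally onto a small container model: texture tiles in first tracks, a larger number of depth tiles in second tracks, and extractor-style instructions in a third track. The sketch below assumes class names and an instruction format of its own; it is not a real container format.
    ```python
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Tile:
        index: int
        payload: bytes

    @dataclass
    class Stream:
        texture_tracks: List[Tile] = field(default_factory=list)                    # "first tracks"
        depth_tracks: List[Tile] = field(default_factory=list)                      # "second tracks"
        extractor_track: List[Tuple[str, str, int]] = field(default_factory=list)   # "third track"

    def build_stream(texture_tiles, depth_tiles):
        # the abstract requires more depth tiles than texture tiles
        assert len(depth_tiles) > len(texture_tiles)
        s = Stream()
        s.texture_tracks = [Tile(i, t) for i, t in enumerate(texture_tiles)]
        s.depth_tracks = [Tile(i, d) for i, d in enumerate(depth_tiles)]
        # instructions saying which tiles to pull back out for a given viewport
        s.extractor_track = [("extract", "texture", 0), ("extract", "depth", 0)]
        return s

    # build_stream([b"tex0"], [b"dep0", b"dep1"])
    ```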
  • Patent number: 11122294
    Abstract: A sequence of point clouds is encoded as a video by an encoder and transmitted to a decoder which retrieves the sequence of point clouds. Visible points of a point cloud are iteratively projected on projection maps according to at least two centers of projection, to determine a patch data item lists. One of the centers of projection is selected and corresponding image patches are generated and packed into a picture. Pictures and associated patch data item list are encoded in a stream. The decoding method decodes pictures and associated patch data item lists. Pixels of image patches comprised in pictures are un-projected according to data stored in associated patches. The methods have the advantage of encoding every point of point clouds in a manner avoiding artifacts and allowing decoding at video frame rate.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: September 14, 2021
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Julien Fleureau, Thierry Tapie, Franck Thudor
  • Patent number: 11095920
    Abstract: A colored 3D scene is encoded as one or two patch atlas images. Points of the 3D scene belonging to a part of space defined by a truncated sphere centered on a point of view, and visible from that point of view, are iteratively projected onto projection maps. At each iteration the projected part is removed from the 3D scene and the truncated sphere defining the next part of the scene to be projected is rotated. Once the entirety of the 3D scene has been projected onto a set of projection maps, pictures are determined within these maps. A picture, also called a patch, is a cluster of depth-consistent connected pixels. Patches are packed into a depth atlas and a color atlas associated with data comprising information relative to the rotation of the truncated sphere, so that a decoder can retrieve the projection mapping and perform the inverse projection.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: August 17, 2021
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Julien Fleureau, Bertrand Chupeau, Franck Thudor
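    To make the iterative "project, remove, rotate" loop of the abstract above concrete, here is a toy 2D peeling loop: points inside the current angular sector (standing in for the truncated sphere) are taken out and recorded together with the rotation used, so that an inverse mapping could be reconstructed. The sector geometry, the angles, and the function name are all assumptions of the sketch.
    ```python
    import math

    def peel(points, viewpoint, sector=math.pi / 2, step=math.pi / 4):
        """Iteratively carve off the points inside a rotating angular sector."""
        remaining = list(points)
        rotation = 0.0
        passes = []
        while remaining:
            kept, rest = [], []
            for p in remaining:
                angle = math.atan2(p[1] - viewpoint[1], p[0] - viewpoint[0]) - rotation
                angle = (angle + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi)
                (kept if abs(angle) <= sector / 2 else rest).append(p)
            # the rotation is stored with the projected part, echoing how the abstract
            # keeps rotation data with the patches for the inverse projection
            passes.append({"rotation": rotation, "projected": kept})
            remaining = rest
            rotation += step
        return passes

    # peel(points=[(1, 0), (0, 1), (-1, 0), (0, -1)], viewpoint=(0, 0))
    ```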
  • Patent number: 11025955
    Abstract: A sequence of point clouds is encoded as a video by an encoder and transmitted to a decoder, which retrieves the sequence of point clouds. Visible points of a point cloud are iteratively projected onto a projection map to determine a patch data item list. Image patches are generated and packed into a picture. Pictures and the associated patch data item lists are encoded in a stream. The decoding method decodes the pictures and associated patch data item lists. Pixels of the image patches comprised in the pictures are un-projected according to data stored in the associated patches. The methods have the advantage of encoding every point of the point clouds in a manner that avoids artifacts and allows decoding at video frame rate.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: June 1, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Julien Fleureau, Renaud Dore, Franck Thudor
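    This grant shares its projection-based encoder with the entries above; as a complement, the sketch below looks at the decoder side only: pixels of each image patch are put back into 3D using data assumed to be stored in the patch data item (a rectangle locating the patch in the picture and a 3D offset). Both fields and the function name are illustrative, not the actual syntax.
    ```python
    def unproject(picture, patch_items):
        """Toy un-projection: rebuild 3D points from packed image patches."""
        points = []
        for item in patch_items:
            u0, v0, w, h = item["rect"]            # where the patch sits in the picture
            ox, oy, oz = item["offset"]            # de-projection offset stored with the patch
            for dv in range(h):
                for du in range(w):
                    depth = picture[v0 + dv][u0 + du]
                    if depth is not None:          # unused pixels stay empty
                        points.append((ox + du, oy + dv, oz + depth))
        return points

    # picture = [[1.0, None], [2.0, 3.0]]
    # unproject(picture, [{"rect": (0, 0, 2, 2), "offset": (10, 20, 0)}])
    ```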
  • Publication number: 20210112236
    Abstract: Methods and devices for encoding/decoding data representative of a 3D scene. First data representative of the texture of the 3D scene visible from a first viewpoint is encoded into first tracks. The first data is arranged in first tiles of a first frame. Second data representative of the depth associated with points of the 3D scene is encoded into second tracks. The second data is arranged in second tiles of a second frame, the total number of second tiles being greater than the total number of first tiles. Instructions to extract at least a part of the first and second data from at least a part of the first and second tracks are further encoded into one or more third tracks.
    Type: Application
    Filed: March 27, 2019
    Publication date: April 15, 2021
    Inventors: Julien FLEUREAU, Bertrand CHUPEAU, Thierry TAPIE, Franck THUDOR
  • Publication number: 20210074029
    Abstract: Methods and devices are provided to encode and decode a data stream carrying data representative of a three-dimensional scene, the data stream comprising color pictures packed in a color image, depth pictures packed in a depth image, and a set of patch data items comprising de-projection data, data for retrieving a color picture in the color image, and geometry data. Two types of geometry data are possible. The first type describes how to retrieve a depth picture in the depth image. The second type comprises an identifier of a parametric function and a list of parameter values for the identified parametric function.
    Type: Application
    Filed: January 14, 2019
    Publication date: March 11, 2021
    Inventors: Julien FLEUREAU, Renaud DORE, Franck THUDOR
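    The two kinds of geometry data in the abstract above fit a tagged-union data model: either a rectangle locating a depth picture in the depth image, or a parametric-function identifier with its parameter values. The classes, field names, and the single "constant depth" example function below are assumptions made for the sketch.
    ```python
    from dataclasses import dataclass
    from typing import List, Tuple, Union

    @dataclass
    class DepthPictureRef:                         # first type of geometry data
        rect: Tuple[int, int, int, int]            # x, y, width, height in the depth image

    @dataclass
    class ParametricGeometry:                      # second type of geometry data
        function_id: int                           # identifier of the parametric function
        parameters: List[float]                    # parameter values for that function

    @dataclass
    class PatchDataItem:
        deprojection: Tuple[float, float, float]   # de-projection data (placeholder)
        color_rect: Tuple[int, int, int, int]      # where the color picture sits in the color image
        geometry: Union[DepthPictureRef, ParametricGeometry]

    def depth_of(item: PatchDataItem, depth_image):
        """Recover depth pixels for a patch from either kind of geometry data."""
        geom = item.geometry
        if isinstance(geom, DepthPictureRef):
            x, y, w, h = geom.rect
            return [row[x:x + w] for row in depth_image[y:y + h]]
        if geom.function_id == 0:                  # made-up id 0: constant-depth function
            return [[geom.parameters[0]]]
        raise ValueError("unknown parametric function id")
    ```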
  • Publication number: 20200380765
    Abstract: Encoding/decoding data representative of a 3D representation of a scene according to a range of points of view can involve generating a depth map associated with a part of the 3D representation according to a parameter representative of a two-dimensional parameterization associated with the part and data associated with a point included in the part, wherein the two-dimensional parameterization can be responsive to geometric information associated with the point and to pose information associated with the range of points of view. A texture map associated with the part can be generated according to the parameter and data associated with the point. First information representative of the point density in a sub-part of that part can be obtained. The depth map, texture map, parameter, and first information can be included in respective syntax elements of a bitstream.
    Type: Application
    Filed: November 6, 2018
    Publication date: December 3, 2020
    Inventors: Franck THUDOR, Bertrand CHUPEAU, Renaud DORE, Thierry TAPIE, Julien FLEUREAU
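    Read as a data model, the abstract above puts four things into per-part syntax elements of the bitstream: the 2D parameterization, the depth and texture maps it indexes, and the point-density information. The field names and types below are guesses for illustration, not the actual bitstream syntax.
    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PartSyntaxElements:
        parameterization: List[float]   # parameters of the 2D parameterization for this part
        depth_map: bytes                # depth map generated from those parameters
        texture_map: bytes              # texture map for the same part
        point_density: float            # "first information": point density in a sub-part

    # part = PartSyntaxElements([0.0, 1.0, 0.5], b"...", b"...", 0.87)
    ```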
  • Publication number: 20200374559
    Abstract: A colored 3D scene is encoded as one or two patch atlas images. Points of the 3D scene belonging to a part of space defined by a truncated sphere centered on a point of view, and visible from that point of view, are iteratively projected onto projection maps. At each iteration the projected part is removed from the 3D scene and the truncated sphere defining the next part of the scene to be projected is rotated. Once the entirety of the 3D scene has been projected onto a set of projection maps, pictures are determined within these maps. A picture, also called a patch, is a cluster of depth-consistent connected pixels. Patches are packed into a depth atlas and a color atlas associated with data comprising information relative to the rotation of the truncated sphere, so that a decoder can retrieve the projection mapping and perform the inverse projection.
    Type: Application
    Filed: November 29, 2018
    Publication date: November 26, 2020
    Inventors: Julien FLEUREAU, Bertrand CHUPEAU, Franck THUDOR
  • Publication number: 20200314449
    Abstract: A sequence of point clouds is encoded as a video by an encoder and transmitted to a decoder, which retrieves the sequence of point clouds. Visible points of a point cloud are iteratively projected onto a projection map to determine a patch data item list. Image patches are generated and packed into a picture. Pictures and the associated patch data item lists are encoded in a stream. The decoding method decodes the pictures and associated patch data item lists. Pixels of the image patches comprised in the pictures are un-projected according to data stored in the associated patches. The methods have the advantage of encoding every point of the point clouds in a manner that avoids artifacts and allows decoding at video frame rate.
    Type: Application
    Filed: July 12, 2018
    Publication date: October 1, 2020
    Inventors: Julien FLEUREAU, Renaud DORE, Franck THUDOR
  • Publication number: 20200228777
    Abstract: A sequence of three-dimensional scenes is encoded as a video by an encoder and transmitted to a decoder, which retrieves the sequence of 3D scenes. Points of a 3D scene visible from a determined point of view are encoded as a color image in a first track of the stream, so as to be decodable independently of the other tracks of the stream. The color image is compatible with a three-degrees-of-freedom rendering. Depth information and the depth and color of the residual points of the scene are encoded in separate tracks of the stream and are decoded only if the decoder is configured to decode the scene for a volumetric rendering.
    Type: Application
    Filed: September 11, 2018
    Publication date: July 16, 2020
    Inventors: Renaud DORE, Julien FLEUREAU, Bertrand CHUPEAU, Gerard BRIAND, Thierry TAPIE, Franck THUDOR
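    The layering in the abstract above can be pictured as a decoder that always reads the first track and only touches the others when volumetric rendering is requested. The track names and the shape of `stream` below are invented for the sketch.
    ```python
    def decode(stream, volumetric=False):
        """Layered decoding: track 1 alone gives a 3DoF view; the rest add volume."""
        view = {"color": stream["track1_color"]}          # decodable on its own (3DoF)
        if volumetric:
            view["depth"] = stream["track2_depth"]
            view["residual"] = stream["track3_residual"]  # depth + color of occluded points
        return view

    # stream = {"track1_color": b"...", "track2_depth": b"...", "track3_residual": b"..."}
    # decode(stream)                   # 3DoF rendering path
    # decode(stream, volumetric=True)  # volumetric rendering path
    ```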
  • Publication number: 20200154137
    Abstract: A sequence of point clouds is encoded as a video by an encoder and transmitted to a decoder, which retrieves the sequence of point clouds. Visible points of a point cloud are iteratively projected onto projection maps according to at least two centers of projection to determine patch data item lists. One of the centers of projection is selected, and the corresponding image patches are generated and packed into a picture. Pictures and the associated patch data item lists are encoded in a stream. The decoding method decodes the pictures and associated patch data item lists. Pixels of the image patches comprised in the pictures are un-projected according to data stored in the associated patches. The methods have the advantage of encoding every point of the point clouds in a manner that avoids artifacts and allows decoding at video frame rate.
    Type: Application
    Filed: July 16, 2018
    Publication date: May 14, 2020
    Inventors: Julien FLEUREAU, Thierry TAPIE, Franck THUDOR
  • Patent number: 10412202
    Abstract: A method of encoding a packet of data representative of a haptic effect, the haptic effect being associated with an immersive content, is described. The method includes adding first information to the packet, the first information being representative of a dependency of the haptic effect on the position of a body model in relation to the immersive content. Thereafter, second information is added to the packet, the second information being representative of at least a first part of the body model targeted by the haptic effect. A method for decoding the packet and corresponding devices are also described.
    Type: Grant
    Filed: May 22, 2016
    Date of Patent: September 10, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Julien Fleureau, Bertrand Leroy, Franck Thudor
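    A possible byte layout for the packet in the abstract above, written purely as an illustration: one flag for whether the effect depends on the body model's position relative to the immersive content, then an identifier of the body part targeted. The body-part table and the two-byte format are assumptions, not the patented encoding.
    ```python
    import struct

    BODY_PARTS = {"left_hand": 1, "right_hand": 2, "torso": 3}   # illustrative table

    def encode_haptic_packet(position_dependent: bool, body_part: str) -> bytes:
        """First information (dependency flag) followed by second information (body part)."""
        return struct.pack("!BB", int(position_dependent), BODY_PARTS[body_part])

    def decode_haptic_packet(packet: bytes):
        flag, part_id = struct.unpack("!BB", packet)
        names = {v: k for k, v in BODY_PARTS.items()}
        return bool(flag), names[part_id]

    # encode_haptic_packet(True, "left_hand")   -> b'\x01\x01'
    # decode_haptic_packet(b'\x01\x01')         -> (True, 'left_hand')
    ```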
  • Publication number: 20180184096
    Abstract: The present disclosure relates to methods and apparatus for encoding and decoding pixel lists of an image. The pixels of an image are clustered according to at least one relationship, and the clusters are stored as pixel lists. An integer matrix of the same size as the image is generated. A pixel of the image at a first index belongs to one list, and the next pixel in that list has a second index. An integer is calculated from the two indices and is stored at the first index in the matrix. The matrix may then be compressed with a lossless compression method. At decoding, a pixel with a first index in the image is associated with a pixel list; a second index is calculated from the integer at the first index in the matrix, and the pixel at the second index in the image is added to the list.
    Type: Application
    Filed: December 27, 2017
    Publication date: June 28, 2018
    Inventors: Julien FLEUREAU, Didier DOYEN, Franck THUDOR
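    One plausible reading of the abstract above, coded up as a sketch: walking each pixel list in order, the matrix stores at each pixel's flattened index the offset to the next pixel in the list, so a decoder can follow the chain; such a matrix compresses well losslessly because most entries are zero or small. The offset encoding and the function names are my assumptions; the application may compute the integer differently.
    ```python
    import numpy as np

    def encode_lists(shape, pixel_lists):
        """Store, at each pixel's index, the offset to the next pixel of its list."""
        height, width = shape
        matrix = np.zeros(height * width, dtype=np.int64)
        for pixels in pixel_lists:
            flat = [y * width + x for (y, x) in pixels]
            for cur, nxt in zip(flat, flat[1:]):
                matrix[cur] = nxt - cur        # integer computed from the two indices
        return matrix.reshape(shape)           # would then be compressed losslessly

    def decode_list(matrix, start):
        """Rebuild the pixel list that starts at the given (row, col) pixel."""
        flat, width = matrix.ravel(), matrix.shape[1]
        index, pixels = start[0] * width + start[1], [start]
        while flat[index] != 0:                # zero marks the end of the chain
            index += flat[index]
            pixels.append((index // width, index % width))
        return pixels

    # m = encode_lists((4, 4), [[(0, 0), (1, 2), (3, 3)]])
    # decode_list(m, (0, 0))   -> [(0, 0), (1, 2), (3, 3)]
    ```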
  • Publication number: 20170347055
    Abstract: A method and device for generating a stream from one or more images of an object are disclosed. The method includes obtaining data associated with points of a point cloud representing at least a part of the object; obtaining a parametric surface according to at least a geometric characteristic associated with that part of the object and pose information of the acquisition device used to acquire the image(s); obtaining a height map and one or more texture maps associated with the parametric surface; and generating the stream by combining a first syntax element relative to the at least one parameter, a second syntax element relative to the height map, a third syntax element relative to the at least one texture map, and a fourth syntax element relative to the position of the acquisition device. The disclosure further relates to a method and device for rendering an image of the object from the stream thus obtained.
    Type: Application
    Filed: May 14, 2017
    Publication date: November 30, 2017
    Inventors: Renaud Dore, Julien Fleureau, Thierry Tapie, Franck Thudor
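    Collecting the four syntax elements named in the abstract above into one structure gives a compact picture of the stream; the class and field names below are placeholders, and obtaining the height and texture maps from the point cloud is elided.
    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ObjectStream:
        surface_parameters: List[float]    # first syntax element: the parametric surface
        height_map: List[List[float]]      # second syntax element: heights above the surface
        texture_maps: List[bytes]          # third syntax element: one or more texture maps
        acquisition_pose: List[float]      # fourth syntax element: pose of the acquisition device

    # stream = ObjectStream([1.0, 0.2], [[0.0, 0.1], [0.05, 0.0]], [b"..."], [0, 0, 1.5, 0, 0, 0])
    # A renderer would invert the process: sample the surface, offset it by the height
    # map, and texture the result using the stored acquisition pose.
    ```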
  • Publication number: 20170289485
    Abstract: A method for displaying a plurality of videos is disclosed. The method comprises displaying a main video in a main graphical unit and displaying a secondary video from the plurality of videos in each of at least one secondary graphical unit, wherein the size, structure, position, transparency, overlap, or shape of the at least one secondary graphical unit depends on information representative of the spatio-temporal connectivity between the currently displayed segment of the main video and the currently displayed segment of the secondary video.
    Type: Application
    Filed: September 4, 2015
    Publication date: October 5, 2017
    Inventors: Tomas Enrique CRIVELLI, Marta EXPOSITO, Franck THUDOR
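    As a toy version of the display rule above, the helper below scales a secondary window and fades it according to a connectivity score between the currently displayed segments; the score itself, its range, and the mapping are all invented for the sketch.
    ```python
    def secondary_window(connectivity, base_size=(160, 90)):
        """Map a spatio-temporal connectivity score in [0, 1] to window properties."""
        scale = 0.5 + 0.5 * connectivity                   # larger when strongly connected
        return {
            "size": (int(base_size[0] * scale), int(base_size[1] * scale)),
            "transparency": round(1.0 - connectivity, 2),  # fades when weakly connected
        }

    # secondary_window(0.8)  -> {'size': (144, 81), 'transparency': 0.2}
    ```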
  • Publication number: 20160353182
    Abstract: The invention relates to a method and a device for synchronising metadata, associated by a first signature with a first version of an audiovisual document, with a second version of this audiovisual document. The metadata are synchronised with the second version of the audiovisual document using a second signature detected in a portion of the second version, that portion being obtained by detecting the first signature in the second version of the audiovisual document. In this way, the precision of the synchronisation between the two items of video content achieved by the first signature is improved by the second signature, and new, more accurate metadata are created.
    Type: Application
    Filed: December 22, 2014
    Publication date: December 1, 2016
    Inventors: Pierre Hellier, Franck Thudor, Lionel Oisel
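    The two-stage idea in the abstract above can be sketched as: find a coarse offset with the first signature, then refine it by matching the second signature inside the window the first match points to. Signatures here are plain lists of numbers and the matching is a brute-force absolute-difference search; real audio/video fingerprints and detectors are far more elaborate.
    ```python
    def best_offset(signature, track):
        """Brute-force search for the offset where the signature fits best."""
        n = len(signature)
        if len(track) < n:
            return 0
        scores = [sum(abs(a - b) for a, b in zip(signature, track[o:o + n]))
                  for o in range(len(track) - n + 1)]
        return scores.index(min(scores))

    def synchronise(first_sig, second_sig, second_version, window=50):
        coarse = best_offset(first_sig, second_version)      # portion located by the first signature
        lo = max(0, coarse - window)
        portion = second_version[lo:lo + 2 * window + len(second_sig)]
        return lo + best_offset(second_sig, portion)         # refined position for the metadata

    # ref = [0.0] * 200; ref[120:125] = [5, 4, 3, 2, 1]
    # synchronise(first_sig=[5, 4, 3], second_sig=[3, 2, 1], second_version=ref)  -> 122
    ```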
  • Publication number: 20160352872
    Abstract: A method of encoding a packet of data representative of a haptic effect, the haptic effect being associated with an immersive content, is described. The method includes adding first information to the packet, the first information being representative of a dependency of the haptic effect on the position of a body model in relation to the immersive content. Thereafter, second information is added to the packet, the second information being representative of at least a first part of the body model targeted by the haptic effect. A method for decoding the packet and corresponding devices are also described.
    Type: Application
    Filed: May 22, 2016
    Publication date: December 1, 2016
    Inventors: Julien FLEUREAU, Bertrand LEROY, Franck THUDOR