Patents by Inventor Gérard Briand

Gérard Briand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11721044
    Abstract: Generating an image from a source image can involve encoding a projection of a part of a three-dimensional scene. Pixels of a source image comprise a depth and a color attribute. Pixels of the source image are de-projected as a colored point cloud. A de-projected point in 3D space has the color attribute of the pixel it has been de-projected from. A score is also attributed to each generated point according to a local depth gradient and/or a local color gradient of the pixel it comes from: the lower the gradient, the higher the score. The generated point cloud is captured by a virtual camera for rendering on a display device. The point cloud is projected onto the viewport image by blending the colors of points projected onto the same pixel, the blending being weighted by the scores of these points.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: August 8, 2023
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Julien Fleureau, Gerard Briand, Renaud Dore
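The gradient-based scoring and score-weighted blending described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names and the exact score formula `1 / (1 + gradient)` are assumptions (the abstract only requires that a lower gradient yield a higher score).

```python
import numpy as np

def deproject_scores(depth):
    """Score each source pixel by its local depth gradient:
    the lower the gradient, the higher the score."""
    gy, gx = np.gradient(depth.astype(float))
    grad = np.hypot(gx, gy)
    return 1.0 / (1.0 + grad)  # monotonically decreasing in the gradient

def blend_viewport(pixels, colors, scores, shape):
    """Accumulate the colors of points projected onto the same viewport
    pixel, weighting each contribution by the point's score."""
    acc = np.zeros(shape + (3,))
    wsum = np.zeros(shape)
    for (y, x), c, s in zip(pixels, colors, scores):
        acc[y, x] += s * c
        wsum[y, x] += s
    mask = wsum > 0
    acc[mask] /= wsum[mask][:, None]  # normalize by the total weight
    return acc
```

A color gradient term could be folded into the score the same way; the blend then naturally favors points de-projected from flat, reliable regions of the source image.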
  • Publication number: 20230217006
    Abstract: Methods, apparatuses and streams are disclosed for transmitting tiled volumetric video and, at the receiver, for generating an atlas image compatible with a legacy decoder. At the server side, viewport information is obtained and a first list of central tiles and a second list of border tiles are selected. A central tile is a part of an image obtained by projecting the 3D scene onto an image plane according to a central point of view. A border tile is an image comprising dis-occluding patches. The sizes and shapes of border tiles are a function of the sizes and shapes of central tiles. At the client side, tiles are arranged according to a layout selected from a set of layouts according to the number, sizes and shapes of border tiles. FIG. 8.
    Type: Application
    Filed: September 2, 2020
    Publication date: July 6, 2023
    Inventors: Bertrand Chupeau, Gerard Briand, Thierry Tapie
  • Publication number: 20220345681
    Abstract: Methods, devices and streams for encoding, decoding and transmitting a multi-view frame are disclosed. In a multi-view frame, some of the views are more trustworthy than others. The multi-view frame is encoded in a data stream in association with metadata that comprise, for at least one of the views, a parameter indicating a degree of confidence in the information carried by that view. This information is used at the decoding side to determine the contribution of the view when synthesizing pixels of a viewport frame for a given point of view in 3D space.
    Type: Application
    Filed: October 1, 2020
    Publication date: October 27, 2022
    Inventors: Julien Fleureau, Bertrand Chupeau, Thierry Tapie, Gerard Briand
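The confidence-weighted contribution of views described above can be illustrated with a toy per-pixel blend. Treating the signaled confidence parameter as a linear blending weight is an assumption for illustration; the specification may define the contribution differently.

```python
def synthesize_pixel(view_samples):
    """Blend candidate colors for one viewport pixel, weighting each
    view's contribution by the confidence parameter carried in the
    stream's metadata for that view.

    view_samples: list of (color, confidence) pairs, one pair per view
    that projects onto this pixel.
    """
    total = sum(conf for _, conf in view_samples)
    if total == 0.0:
        return 0.0  # no trustworthy contribution for this pixel
    return sum(color * conf for color, conf in view_samples) / total
```

A view flagged with zero confidence then contributes nothing, while fully trusted views dominate the synthesized pixel.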
  • Publication number: 20220264150
    Abstract: At least one embodiment relates to a method and apparatus for encoding a volumetric video representing a scene, said encoding being based on patches representing the color and depth of a 2D projection of subparts of the scene, wherein a first patch is packed in a second patch for a given time interval lower than or equal to a time period along which the second patch is defined when said first patch can be packed in said second patch over said time interval. Decoding method and apparatus are also provided.
    Type: Application
    Filed: June 23, 2020
    Publication date: August 18, 2022
    Inventors: Julien Fleureau, Franck Thudor, Gerard Briand, Renaud Dore
  • Publication number: 20220256134
    Abstract: Methods, devices and a data stream are provided for signaling and decoding information representative of restrictions on navigation in a volumetric video. The data stream comprises metadata associated with video data representative of the volumetric video. The metadata comprise data representative of a viewing bounding box, data representative of a curvilinear path in the 3D space of said volumetric video, and data representative of at least one viewing direction range associated with a point on the curvilinear path.
    Type: Application
    Filed: July 14, 2020
    Publication date: August 11, 2022
    Inventors: Bertrand Chupeau, Gérard Briand, Renaud Dore
  • Publication number: 20220254068
    Abstract: Generating an image from a source image can involve encoding a projection of a part of a three-dimensional scene. Pixels of a source image comprise a depth and a color attribute. Pixels of the source image are de-projected as a colored point cloud. A de-projected point in 3D space has the color attribute of the pixel it has been de-projected from. A score is also attributed to each generated point according to a local depth gradient and/or a local color gradient of the pixel it comes from: the lower the gradient, the higher the score. The generated point cloud is captured by a virtual camera for rendering on a display device. The point cloud is projected onto the viewport image by blending the colors of points projected onto the same pixel, the blending being weighted by the scores of these points.
    Type: Application
    Filed: May 26, 2020
    Publication date: August 11, 2022
    Inventors: Julien Fleureau, Gerard Briand, Renaud Dore
  • Publication number: 20220167015
    Abstract: A method and a device are disclosed for encoding volumetric video in a patch-based atlas format with intra-periods of varying length. A first atlas layout is built for a first sequence of 3D scenes. The number of 3D scenes in the sequence is chosen to fit the size of a GoP (group of pictures) of the codec. A second sequence is iteratively set up by appending the next 3D scene of the sequence to encode, as long as the number of patches of the layout built for this second sequence is lower than or equal to the number of patches of the first layout. When the iterations end, one of the layouts is selected to generate every atlas of the group. In this way, the size of the metadata is decreased and compression is enhanced.
    Type: Application
    Filed: March 19, 2020
    Publication date: May 26, 2022
    Inventors: Julien FLEUREAU, Bertrand CHUPEAU, Gerard BRIAND, Renaud DORE, Franck THUDOR
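One plausible reading of the iterative extension described above is a greedy loop: start from a GoP-sized first sequence, then keep appending scenes while the layout built for the extended sequence needs no more patches than the first layout did. `build_layout` is a hypothetical stand-in for the patent's atlas-layout construction:

```python
def choose_intra_period(scenes, gop_size, build_layout):
    """Greedily extend the encoded sequence one 3D scene at a time, as
    long as its layout needs no more patches than the first layout.

    scenes: ordered list of 3D scenes to encode.
    build_layout: callable returning the list of patches of the atlas
    layout built for a scene subsequence (a stand-in for the real
    layout builder).
    """
    budget = len(build_layout(scenes[:gop_size]))  # patch count of the first layout
    end = gop_size
    while end < len(scenes):
        candidate = build_layout(scenes[:end + 1])
        if len(candidate) > budget:
            break  # appending this scene would exceed the patch budget
        end += 1
    # a single layout is then reused to generate every atlas of the group
    return scenes[:end]
```

Because one layout serves the whole (possibly longer) intra-period, the patch metadata is transmitted once per period instead of once per GoP.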
  • Publication number: 20220138990
    Abstract: A sequence of three-dimensional scenes is encoded as a video by an encoder and transmitted to a decoder, which retrieves the sequence of 3D scenes. Points of a 3D scene visible from a determined point of view are encoded as a color image in a first track of the stream so as to be decodable independently from the other tracks of the stream. The color image is compatible with three-degrees-of-freedom rendering. Depth information, and the depth and color of the residual points of the scene, are encoded in separate tracks of the stream and are decoded only if the decoder is configured to decode the scene for volumetric rendering.
    Type: Application
    Filed: June 24, 2019
    Publication date: May 5, 2022
    Inventors: Julien FLEUREAU, Bertrand CHUPEAU, Gerard BRIAND, Renaud DORE, Thierry TAPIE, Franck THUDOR
  • Publication number: 20210195162
    Abstract: A method and device for encoding data representative of a 3D scene into a container and a corresponding method and device for decoding the encoded data are disclosed.
    Type: Application
    Filed: October 3, 2018
    Publication date: June 24, 2021
    Inventors: Bertrand CHUPEAU, Gerard BRIAND, Mary-Luc CHAMPEL
  • Patent number: 10891784
    Abstract: Method and device for generating a stream of data representative of a 3D point cloud. The 3D point cloud is partitioned into a plurality of 3D elementary parts. A set of two-dimensional (2D) parametrizations is determined, each 2D parametrization representing one 3D part of the point cloud with a set of parameters. Each 3D part is represented as a 2D pixel image. A depth map and a color map are determined as a first patch atlas and a second patch atlas. A data stream is generated by combining and/or coding the parameters of the 2D parametrizations, the first patch atlas, the second patch atlas and mapping information that links each 2D parametrization with its associated depth map and color map in the first and second patch atlases, respectively.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: January 12, 2021
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Renaud Dore, Franck Galpin, Gerard Briand, Julien Fleureau
  • Publication number: 20190371051
    Abstract: Method and device for generating a stream of data representative of a 3D point cloud. The 3D point cloud is partitioned into a plurality of 3D elementary parts. A set of two-dimensional (2D) parametrizations is determined, each 2D parametrization representing one 3D part of the point cloud with a set of parameters. Each 3D part is represented as a 2D pixel image. A depth map and a color map are determined as a first patch atlas and a second patch atlas. A data stream is generated by combining and/or coding the parameters of the 2D parametrizations, the first patch atlas, the second patch atlas and mapping information that links each 2D parametrization with its associated depth map and color map in the first and second patch atlases, respectively.
    Type: Application
    Filed: January 8, 2018
    Publication date: December 5, 2019
    Inventors: Renaud Dore, Franck Galpin, Gerard Briand, Julien Fleureau
  • Publication number: 20190251735
    Abstract: Method and device for generating a stream from image(s) of an object, comprising: obtaining data associated with points of a point cloud representing at least a part of the object; obtaining a parametric surface according to at least a geometric characteristic associated with the at least a part of the object and pose information of an acquisition device used to acquire the at least one image; obtaining a height map and one or more texture maps associated with the parametric surface; and generating the stream by combining a first syntax element relative to the at least one parameter, a second syntax element relative to the height map, a third syntax element relative to the at least one texture map and a fourth syntax element relative to a position of the acquisition device. The disclosure further relates to a method and device for rendering an image of the object from the stream thus obtained.
    Type: Application
    Filed: September 7, 2017
    Publication date: August 15, 2019
    Inventors: Julien FLEUREAU, Gerard BRIAND, Renaud DORE
  • Patent number: 9569884
    Abstract: To generate shadows in an image, the method comprises the steps of: computing a depth map comprising an array of pixels, wherein each pixel in the depth map is associated with a single depth value that indicates the depth from a light source to the portion of the nearest occluding object visible through the pixel; projecting a point visible through a pixel of said image into light space, the result of said projection being a pixel of said depth map; calculating the distance between said visible point and the light source; fetching the depth value associated with said pixel of the depth map; computing, for said pixel of said image, an adaptive bias as a function of a predetermined base bias and a relationship between the normal of the surface on which said visible point is located and the incident light direction at said visible point; and comparing, for said pixel in the image, the distance between said visible point and the light source with the sum of the corresponding depth map value and said adaptive bias.
    Type: Grant
    Filed: March 26, 2010
    Date of Patent: February 14, 2017
    Assignee: THOMSON LICENSING
    Inventors: Pascal Gautron, Jean-Eudes Marvie, Gerard Briand
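The adaptive-bias shadow test described above can be sketched as follows. Scaling the base bias by the inverse cosine of the angle between the normal and the light direction is one common realization of the stated "relationship"; the exact function used by the patent is not given in the abstract, so this formula is an assumption.

```python
import math

def adaptive_bias(base_bias, normal, light_dir):
    """Scale a predetermined base bias by the angle between the surface
    normal and the incident light direction: surfaces lit at grazing
    angles get a larger bias, which limits shadow acne without
    over-biasing front-lit surfaces."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    n_len = math.sqrt(nx * nx + ny * ny + nz * nz)
    l_len = math.sqrt(lx * lx + ly * ly + lz * lz)
    cos_theta = abs(nx * lx + ny * ly + nz * lz) / (n_len * l_len)
    return base_bias / max(cos_theta, 1e-4)  # clamp to avoid blow-up at grazing angles

def in_shadow(dist_to_light, depth_map_value, bias):
    """A visible point is shadowed when it lies farther from the light
    than the nearest occluder recorded in the depth map, plus the bias."""
    return dist_to_light > depth_map_value + bias
```

With a fixed bias, grazing-angle surfaces self-shadow ("shadow acne"); making the bias adaptive per pixel is precisely what the comparison step above exploits.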
  • Patent number: 9082221
    Abstract: A method for real-time construction of a video sequence comprising a modelled 3D object is provided. The method comprises pre-calculating data representative of a first image of a three-dimensional environment and first associated depth information. Data representative of a second image, representing the modelled object onto which a current image of a live video stream is mapped, and second depth information associated with the second image are then calculated live. The sequence is composed by combining the first image and the second image according to the first and second depth information.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: July 14, 2015
    Assignee: THOMSON LICENSING
    Inventors: Jean-Eudes Marvie, Gerard Briand
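The depth-based combination step described above amounts to per-pixel z-compositing: for each pixel, keep the color whose depth is nearer to the camera. A minimal sketch (array names are hypothetical; depth is assumed to increase away from the camera):

```python
import numpy as np

def compose(pre_img, pre_depth, live_img, live_depth):
    """Combine a pre-calculated environment image with a live-rendered
    object image by comparing their per-pixel depths, so the object
    correctly occludes, or is occluded by, the environment."""
    nearer = live_depth < pre_depth              # where the live object is in front
    return np.where(nearer[..., None], live_img, pre_img)
```

Pre-calculating the expensive environment image once, and rendering only the textured object live, is what makes the construction feasible in real time.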
  • Patent number: 8164591
    Abstract: The invention concerns a device for generating mutual photometric effects, a server for delivering photometric parameters for generating mutual photometric effects, and a system including such a device and such a server. The device comprises a receiver for receiving and demultiplexing the visual data sets and the photometric parameters respectively associated with the data sets; a module for defining, from these photometric parameters, the mutual photometric effects to be generated; and a compositor and a rendering module for positioning the visual data sets in the common support space and applying the effects defined by the photometric parameters from at least one of the visual data sets to at least one other of the visual data sets, so that at least one visual data set influences another visual data set in the common support space.
    Type: Grant
    Filed: April 30, 2002
    Date of Patent: April 24, 2012
    Assignee: Thomson Licensing
    Inventors: Jürgen Stauder, Bertrand Chupeau, Gérard Briand
  • Publication number: 20120001911
    Abstract: To generate shadows in an image, the method comprises the steps of: computing a depth map comprising an array of pixels, wherein each pixel in the depth map is associated with a single depth value that indicates the depth from a light source to the portion of the nearest occluding object visible through the pixel; projecting a point visible through a pixel of said image into light space, the result of said projection being a pixel of said depth map; calculating the distance between said visible point and the light source; fetching the depth value associated with said pixel of the depth map; computing, for said pixel of said image, an adaptive bias as a function of a predetermined base bias and a relationship between the normal of the surface on which said visible point is located and the incident light direction at said visible point; and comparing, for said pixel in the image, the distance between said visible point and the light source with the sum of the corresponding depth map value and said adaptive bias.
    Type: Application
    Filed: March 26, 2010
    Publication date: January 5, 2012
    Applicant: THOMSON LICENSING
    Inventors: Pascal Gautron, Jean-Eudes Marvie, Gerard Briand
  • Publication number: 20110090307
    Abstract: The invention relates to a method for live construction of a video sequence comprising a modelled 3D object, the method comprising the following steps: pre-calculating data representative of at least one first image of a three-dimensional environment and a first item of associated depth information; calculating live both data representative of at least one second image, representing said modelled object onto which a current image of a live video stream is mapped, and a second item of depth information associated with said at least one second image; and composing said sequence live by combining said at least one first image and said at least one second image according to said first and second items of depth information.
    Type: Application
    Filed: June 29, 2009
    Publication date: April 21, 2011
    Inventors: Jean-Eudes Marvie, Gerard Briand
  • Patent number: 7289157
    Abstract: The process is characterized in that it performs: counting the number of motion vectors at least one component of which is greater than a predetermined value less than the maximum value, and comparing this number with at least one predetermined threshold; the motion vector field is declared saturating if this number is greater than the predetermined threshold. The application relates to image interpolation.
    Type: Grant
    Filed: May 9, 2001
    Date of Patent: October 30, 2007
    Assignee: Thomson Licensing
    Inventors: Gérard Briand, Juan Moronta, Alain Verdier
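The counting-and-thresholding test described above can be sketched in a few lines. Comparing absolute component values is an assumption here (motion-vector components are signed), as is representing vectors as `(vx, vy)` pairs:

```python
def field_saturates(vectors, limit, threshold):
    """Declare a motion-vector field saturating when too many vectors
    have a component whose magnitude exceeds `limit`, a predetermined
    value chosen below the maximum representable magnitude.

    vectors: iterable of (vx, vy) component pairs.
    """
    count = sum(1 for vx, vy in vectors
                if abs(vx) > limit or abs(vy) > limit)
    return count > threshold
```

Flagging a saturating field lets an image interpolator fall back to a safer strategy (e.g. frame repetition) when the estimated motion is unreliable at the edges of its range.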
  • Patent number: 7194034
    Abstract: A method of detecting the reliability of a field of motion vectors of one image in a sequence of video images. The method includes a stage of calculating a stability parameter, Det_Stab(t), for the field. The parameter is based on a comparison (4), over two successive images, of the number of occurrences of the majority vectors of the motion-vector fields of each of these images. A field is defined as stable if the variation in the number of occurrences lies within a predefined bracket. Reliability (7) is decided on the basis of this stability parameter.
    Type: Grant
    Filed: February 13, 2002
    Date of Patent: March 20, 2007
    Assignee: Thomson Licensing
    Inventors: Gérard Briand, Juan Moronta, Alain Verdier
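The majority-vector stability test described above can be sketched as follows. Reducing the comparison to the single most frequent vector per field, and treating the bracket as a symmetric bound on the occurrence difference, are assumptions for illustration:

```python
from collections import Counter

def field_is_stable(prev_field, curr_field, bracket):
    """Compare, over two successive images, the number of occurrences
    of the majority (most frequent) vector of each motion-vector field;
    the field is stable when the variation in that count lies within
    the predefined bracket.

    prev_field, curr_field: iterables of (vx, vy) pairs.
    """
    prev_occurrences = Counter(prev_field).most_common(1)[0][1]
    curr_occurrences = Counter(curr_field).most_common(1)[0][1]
    return abs(curr_occurrences - prev_occurrences) <= bracket
```

The resulting boolean plays the role of the stability parameter on which the reliability decision is based.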
  • Patent number: 7190842
    Abstract: The present invention relates to an elementary cell of a linear filter for image processing, as well as to a corresponding module, element and process. The cell comprises a data circulation output and a calculation output, as well as a main delay line and an auxiliary delay line in parallel. Delay-line selection means (MUX4) make it possible to link the input of the cell to the circulation output by way of either of the delay lines. The cell also comprises an adder having two inputs, which can be linked respectively to the input of the cell and to the output of the main delay line by calculation selection means (MUX1, MUX2), and a multiplier at the output of the adder, connected to a multiplier-coefficients memory. Applications include linear filtering for image processing and random access for motion compensation.
    Type: Grant
    Filed: March 28, 2001
    Date of Patent: March 13, 2007
    Assignee: Thomson Licensing
    Inventors: Gérard Briand, Jean-Yves Babonneau, Didier Doyen, Patrice Lesec