Patents by Inventor Didier Doyen

Didier Doyen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11964200
    Abstract: In a particular implementation, a user environment space for haptic feedback and interactivity (HapSpace) is proposed. In one embodiment, the HapSpace is a virtual space attached to the user and defined by the maximum distance that the user's body can reach. The HapSpace may move as the user moves. Haptic objects and haptic devices, and their associated haptic properties, may also be defined within the HapSpace. New descriptors, such as those enabling precise positioning of, and links between, the user and haptic objects/devices, are defined for describing the HapSpace.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: April 23, 2024
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Philippe Guillotel, Fabien Danieau, Julien Fleureau, Didier Doyen
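
The following is a minimal, illustrative sketch of the HapSpace idea described in patent 11964200: a virtual space anchored to the user, bounded by the user's maximum reach, that moves with the user and holds haptic objects. The class and field names are invented for the example and do not reflect the descriptors defined in the patent.

```python
# Illustrative only: a toy data model for a user-attached haptic space,
# not the descriptor syntax defined in the patent.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class HapticObject:
    name: str
    position: np.ndarray  # position expressed in the HapSpace frame (metres)


@dataclass
class HapSpace:
    """Virtual space attached to the user, bounded by the user's maximum reach."""
    user_position: np.ndarray          # world coordinates of the user (metres)
    reach: float                       # maximum distance the user's body can reach
    objects: list = field(default_factory=list)

    def move_user(self, new_position):
        # The HapSpace follows the user: only the anchor point changes;
        # object positions stay expressed relative to the user.
        self.user_position = np.asarray(new_position, dtype=float)

    def contains(self, obj: HapticObject) -> bool:
        # An object is inside the HapSpace if it lies within the reach sphere.
        return float(np.linalg.norm(obj.position)) <= self.reach


if __name__ == "__main__":
    space = HapSpace(user_position=np.zeros(3), reach=0.9)
    fan = HapticObject("fan", position=np.array([0.4, 0.1, 0.3]))
    space.objects.append(fan)
    space.move_user([1.0, 0.0, 0.0])   # the space moves with the user
    print(space.contains(fan))          # True: still within reach
```
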
  • Patent number: 11962745
    Abstract: A method and system are provided for processing image content. The method comprises receiving information about content captured by at least one camera. The content includes a multi-view representation of an image containing both distorted and undistorted areas. The camera parameters and image parameters are then obtained and used to determine which areas of the image are distorted and which are undistorted. A depth map of the image is then calculated using the determined distorted and undistorted information. A final stereoscopic image is then rendered using the distorted and undistorted areas together with the calculated depth map.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: April 16, 2024
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Didier Doyen, Franck Galpin, Guillaume Boisson
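
A hedged sketch of one step only of the pipeline described in patent 11962745: flagging which areas of a view are significantly distorted from the camera parameters, here with a simple polynomial radial-distortion model and an arbitrary pixel-shift threshold. The depth-map computation and the stereoscopic rendering steps of the patent are not reproduced.

```python
# Minimal sketch: classify pixels as distorted or undistorted from camera
# parameters. The distortion model and threshold are assumptions.
import numpy as np


def distortion_mask(width, height, fx, fy, cx, cy, k1, k2, max_shift_px=0.5):
    """Return a boolean map: True where radial distortion moves a pixel
    by more than `max_shift_px` pixels (threshold chosen for illustration)."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    # Normalised image coordinates.
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # Simple polynomial radial model: x_d = x * (1 + k1*r^2 + k2*r^4).
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    du = (x * factor - x) * fx
    dv = (y * factor - y) * fy
    shift = np.hypot(du, dv)
    return shift > max_shift_px


if __name__ == "__main__":
    mask = distortion_mask(1920, 1080, fx=900.0, fy=900.0, cx=960.0, cy=540.0,
                           k1=-0.25, k2=0.05)
    print("distorted pixels:", int(mask.sum()), "of", mask.size)
```
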
  • Publication number: 20230393525
    Abstract: Processing image information associated with a 3D scene can involve obtaining image data associated with at least one layer of the 3D scene; determining at least one phase increment distribution associated with the at least one layer for modifying, at the at least one layer, an image size associated with the scene; and determining a propagation of an image wave front, corresponding to the at least one layer, to a result layer at a distance from the scene to form a propagated image wave front at the result layer representing a hologram of the scene, wherein determining the propagation includes applying the at least one phase increment distribution associated with the at least one layer to the image wave front at the at least one layer.
    Type: Application
    Filed: October 19, 2021
    Publication date: December 7, 2023
    Applicant: InterDigital CE Patent Holdings, SAS
    Inventors: Vincent Brac De La Perriere, Didier Doyen, Valter Drazic, Arno Schubert, Benoit Vandame
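
The sketch below illustrates the general layer-to-hologram mechanism behind publication 20230393525: a phase distribution is applied to each layer's wave front, which is then propagated to a result plane, here using the standard angular-spectrum method. The specific phase-increment distributions and the size-modification behaviour of the application are not reproduced; all parameter values are assumptions.

```python
# Sketch of layer-based hologram synthesis: per-layer phase, then free-space
# propagation to the hologram plane via the angular-spectrum method.
import numpy as np


def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex wave front `field` (2-D array) by `distance` metres."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function of free-space propagation (evanescent waves clipped).
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)


def layers_to_hologram(layers, depths, phase_increments, wavelength, pitch, hologram_distance):
    """Sum the propagated wave fronts of all layers at the result (hologram) plane."""
    result = np.zeros_like(layers[0], dtype=complex)
    for amplitude, z, phase in zip(layers, depths, phase_increments):
        wavefront = amplitude * np.exp(1j * phase)      # apply the layer's phase distribution
        result += angular_spectrum_propagate(wavefront, wavelength, pitch,
                                             hologram_distance - z)
    return result


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [rng.random((256, 256)) for _ in range(3)]
    phases = [rng.uniform(0, 2 * np.pi, (256, 256)) for _ in range(3)]
    holo = layers_to_hologram(layers, depths=[0.00, 0.01, 0.02],
                              phase_increments=phases,
                              wavelength=532e-9, pitch=8e-6, hologram_distance=0.1)
    print(holo.shape, holo.dtype)
```
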
  • Patent number: 11803980
    Abstract: The invention relates to layered depth data. In multi-view images, there is a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known solution for formatting multi-view images that reduces the amount of redundant information between images. In LDV, a reference central image is selected, and the information brought by the other images of the multi-view set, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer, and thus fails to render viewpoints that uncover multiple layers of dis-occlusions. The invention uses light-field content, which offers disparities in every direction and enables a change in viewpoint in a plurality of directions distinct from the viewing direction of the considered image, making it possible to render viewpoints that uncover multiple layers of dis-occlusions, as may occur with complex scenes viewed with a wide inter-axial distance.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: October 31, 2023
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Didier Doyen, Guillaume Boisson, Sylvain Thiebaud
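
As a rough illustration of the data layout implied by patent 11803980, the toy structure below extends the single horizontal occlusion layer of classic LDV to occlusion layers indexed by viewing direction. All class and field names are invented for the sketch.

```python
# Illustrative data layout only: a central view plus occlusion layers for
# several disparity directions, in contrast with classic LDV, which carries
# a single horizontal occlusion layer.
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class Layer:
    color: np.ndarray   # H x W x 3
    depth: np.ndarray   # H x W
    mask: np.ndarray    # H x W boolean: which pixels carry occluded content


@dataclass
class MultiDirectionalLDV:
    central_color: np.ndarray
    central_depth: np.ndarray
    # One list of occlusion layers per viewpoint-change direction, so that
    # dis-occlusions can be filled when the viewpoint moves horizontally,
    # vertically or diagonally.
    occlusion_layers: Dict[str, List[Layer]] = field(default_factory=dict)


if __name__ == "__main__":
    h, w = 4, 6
    ldv = MultiDirectionalLDV(np.zeros((h, w, 3)), np.ones((h, w)))
    for direction in ("left", "right", "up", "down"):
        ldv.occlusion_layers[direction] = [
            Layer(np.zeros((h, w, 3)), np.ones((h, w)), np.zeros((h, w), bool))
        ]
    print(sorted(ldv.occlusion_layers))
```
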
  • Publication number: 20230326128
    Abstract: A device, an apparatus and associated methods are provided. In one embodiment, the method comprises obtaining a multi-plane image (MPI) representation of a three-dimensional (3D) scene. The MPI representation includes a plurality of slices of content from the 3D scene, each slice corresponding to a different depth relative to the position of a first virtual camera. Each slice is decomposed into regular tiles, and the orientation of each tile is determined.
    Type: Application
    Filed: September 24, 2021
    Publication date: October 12, 2023
    Applicant: InterDigital CE Patent Holdings, SAS
    Inventors: Benoit Vandame, Didier Doyen, Frederic Babon, Remy Gendrot
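
The abstract of publication 20230326128 does not spell out how tile orientations are obtained; the sketch below shows one plausible, hypothetical reading in which a plane is least-squares fitted to depth samples inside each regular tile and its normal is taken as the tile orientation. The application's actual criterion may differ.

```python
# Hypothetical sketch: per-tile orientation from a least-squares plane fit
# to depth samples. Tile size and fitting model are assumptions.
import numpy as np


def tile_normals(depth, tile=16):
    """Fit z = a*x + b*y + c per tile; return an (H//tile, W//tile, 3) array of unit normals."""
    h, w = depth.shape
    normals = np.zeros((h // tile, w // tile, 3))
    ys, xs = np.meshgrid(np.arange(tile), np.arange(tile), indexing="ij")
    A = np.stack([xs.ravel(), ys.ravel(), np.ones(tile * tile)], axis=1)
    for ty in range(h // tile):
        for tx in range(w // tile):
            z = depth[ty * tile:(ty + 1) * tile, tx * tile:(tx + 1) * tile].ravel()
            (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
            n = np.array([-a, -b, 1.0])
            normals[ty, tx] = n / np.linalg.norm(n)
    return normals


if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    depth = 2.0 + 0.01 * xx            # a gently tilted surface
    print(tile_normals(depth, tile=16)[0, 0])
```
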
  • Publication number: 20230215030
    Abstract: An apparatus and a method are provided for image processing. In one embodiment, the method comprises accessing a plurality of images captured by at least a reference camera, wherein the images represent a plurality of views of the same scene. A plurality of plane sweep volume (PSV) slices are then generated from said images by computing, for each slice, a flow map from at least the reference camera calibration parameters and generating the slice from this flow map and a previous slice of the plane sweep volume.
    Type: Application
    Filed: June 7, 2021
    Publication date: July 6, 2023
    Inventors: Guillaume Boisson, Tomas Volker, Bertrand Chupeau, Didier Doyen
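
For context on publication 20230215030, the sketch below builds a conventional plane sweep volume by warping a source view onto fronto-parallel planes of the reference camera using the plane-induced homography. The flow-map-based slice generation that the application actually claims is not reproduced; camera parameters and depths are assumptions.

```python
# Sketch of a standard plane-sweep construction: one warped slice per
# candidate depth plane, using the plane-induced homography.
import numpy as np


def plane_homography(K_ref, K_src, R, t, depth):
    """Homography mapping reference-image pixels to source-image pixels for the
    fronto-parallel plane at `depth` (plane normal [0, 0, 1] in the reference frame).
    R, t map reference-camera coordinates to source-camera coordinates."""
    n = np.array([[0.0, 0.0, 1.0]])
    H = K_src @ (R - (t.reshape(3, 1) @ n) / depth) @ np.linalg.inv(K_ref)
    return H / H[2, 2]


def warp_nearest(image, H_ref_to_src):
    """Nearest-neighbour warp of the source `image` into the reference grid."""
    h, w = image.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    q = H_ref_to_src @ pts
    us = np.round(q[0] / q[2]).astype(int).reshape(h, w)
    vs = np.round(q[1] / q[2]).astype(int).reshape(h, w)
    valid = (us >= 0) & (us < w) & (vs >= 0) & (vs < h)
    slice_img = np.zeros_like(image)
    slice_img[valid] = image[vs[valid], us[valid]]
    return slice_img


def plane_sweep_volume(src_image, K_ref, K_src, R, t, depths):
    return [warp_nearest(src_image, plane_homography(K_ref, K_src, R, t, d)) for d in depths]


if __name__ == "__main__":
    K = np.array([[500.0, 0, 128], [0, 500.0, 128], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.1, 0.0, 0.0])     # small horizontal baseline
    img = np.random.default_rng(1).random((256, 256))
    psv = plane_sweep_volume(img, K, K, R, t, depths=[1.0, 2.0, 4.0, 8.0])
    print(len(psv), psv[0].shape)
```
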
  • Publication number: 20230186522
    Abstract: To represent a 3D scene, the MPI format uses a set of fronto-parallel planes. Unlike MPI, the current MIV standard accepts as input a 3D scene represented as a sequence of pairs of texture and depth pictures. To enable transmission of an MPI cube via the MIV-V3C standard, in one embodiment, the MPI cube is divided into empty regions and local MPI partitions that contain 3D objects. Each partition in the MPI cube can be projected to one or more patches. For a patch, the geometry is generated as well as the texture and alpha attributes, and the alpha attributes may be represented as a peak and a width of an impulse. In another embodiment, an RGBA layer of the MPI is cut into sub-images. Each sub-image may correspond to a patch, and the RGB and alpha information of the sub-image are assigned to the patch.
    Type: Application
    Filed: April 27, 2021
    Publication date: June 15, 2023
    Inventors: Renaud Dore, Bertrand Chupeau, Benoit Vandame, Julien Fleureau, Didier Doyen
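
The sketch below illustrates only the idea, mentioned in publication 20230186522, of representing an alpha profile along the MPI depth axis as a peak and a width of an impulse. The half-maximum definition of the width and the box-shaped reconstruction are invented for the example; the application's exact parameterisation may differ.

```python
# Toy illustration: compact an alpha profile along the MPI depth axis into a
# (peak, width) pair, plus a simple box reconstruction.
import numpy as np


def encode_alpha_profile(alpha):
    """alpha: 1-D transparency profile along the depth layers of one MPI sample."""
    peak = int(np.argmax(alpha))
    # Width measured as the number of layers with at least half the peak value.
    width = int(np.count_nonzero(alpha >= 0.5 * alpha[peak])) if alpha[peak] > 0 else 0
    return peak, width


def decode_alpha_profile(peak, width, n_layers, peak_value=1.0):
    alpha = np.zeros(n_layers)
    half = width // 2
    alpha[max(0, peak - half):min(n_layers, peak - half + max(width, 0))] = peak_value
    return alpha


if __name__ == "__main__":
    profile = np.array([0.0, 0.1, 0.8, 1.0, 0.7, 0.1, 0.0, 0.0])
    peak, width = encode_alpha_profile(profile)
    print(peak, width, decode_alpha_profile(peak, width, len(profile)))
```
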
  • Publication number: 20230179799
    Abstract: Predicting a component of a current pixel belonging to a current sub-aperture image in a matrix of sub-aperture images captured by a sensor of a type I plenoptic camera can involve, first, determining a location on the sensor based on: a distance from an exit pupil of a main lens of the camera to a micro-lens array of the camera; a focal length of the main lens; a focal length of the micro-lenses of the micro-lens array; and a set of parameters of a model of the camera allowing for a derivation of a two-plane parameterization describing the field of rays corresponding to the pixels of the sensor; and, second, predicting the component based on one reference pixel belonging to a reference sub-aperture image in the matrix and located on the sensor in a neighborhood of the location.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 8, 2023
    Applicant: InterDigital VC Holdings, Inc.
    Inventors: Didier Doyen, Olivier Bureller, Guillaume Boisson
  • Patent number: 11665369
    Abstract: The present disclosure relates to the transmission of sets of data and metadata, and more particularly to the transmission of light-field contents. Light-field data take up large amounts of storage space, which makes storage cumbersome and processing less efficient. In addition, light-field acquisition devices are extremely heterogeneous, and each camera has its own proprietary file format. Since light-field data acquired by different cameras come in a diversity of formats, complex processing is required on the receiver side. To this end, a method is proposed for encoding a signal representative of light-field content in which the parameters representing the rays of light sensed by the different pixels of the sensor are mapped onto the sensor. A second set of encoded parameters is used to reconstruct the light-field content from the parameters representing the rays of light sensed by the different pixels of the sensor.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: May 30, 2023
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Paul Kerbiriou, Didier Doyen, Sebastien Lasserre
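
As a rough illustration of the idea in patent 11665369 of mapping per-pixel ray parameters onto the sensor, the sketch below fills four maps with a two-plane parameterisation for an idealised pinhole camera, plus a small metadata dictionary standing in for the second set of parameters. The actual signal syntax of the patent is not shown; the pinhole model and plane positions are assumptions.

```python
# Illustration only: per-pixel ray parameter maps (two-plane parameterisation)
# for an idealised pinhole camera at the origin looking along +z.
import numpy as np


def build_ray_parameter_maps(height, width, fx, fy, cx, cy, z1=1.0, z2=2.0):
    """Return four H x W maps (x1, y1, x2, y2): the intersections of each pixel's
    ray with the planes z = z1 and z = z2, plus the plane positions as metadata."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    dx = (u - cx) / fx            # ray direction (dx, dy, 1), up to scale
    dy = (v - cy) / fy
    x1, y1 = dx * z1, dy * z1     # intersection with plane z = z1
    x2, y2 = dx * z2, dy * z2     # intersection with plane z = z2
    return np.stack([x1, y1, x2, y2]), {"z1": z1, "z2": z2}


if __name__ == "__main__":
    maps, meta = build_ray_parameter_maps(4, 6, fx=500.0, fy=500.0, cx=3.0, cy=2.0)
    print(maps.shape, meta)
```
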
  • Publication number: 20230088309
    Abstract: A device includes at least one first rectangular sensor for capturing first image data, at least one second rectangular sensor, arranged orthogonal to the at least one first sensor, for capturing second image data, and at least one hardware processor configured to cause the at least one first and second sensors to capture, respectively, the first image data and the second image data at least substantially simultaneously, and to at least one of: display simultaneously data from the first image data and data from the second image data as a cross-shaped image, or store together data from the first image data and data from the second image data as a cross-shaped image. The resulting first and second image data can be stored in a single file in memory. The at least one hardware processor can process the image data to remove redundancies between them. The device can also extract, from the first and second image data, image data corresponding to a rectangle parallel with the horizon.
    Type: Application
    Filed: February 3, 2021
    Publication date: March 23, 2023
    Inventors: Frederic Babon, Tristan Langlois, Guillaume Boisson, Didier Doyen
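
The toy function below only illustrates composing two orthogonal rectangular captures, as described in publication 20230088309, into a cross-shaped image on a shared canvas. Sensor sizes and the shared-centre assumption are invented for the example; file storage and redundancy removal are not shown.

```python
# Toy illustration: place a landscape capture and a portrait capture that
# share the same optical centre on a common canvas, forming a cross shape.
import numpy as np


def compose_cross(landscape, portrait, fill=np.nan):
    """landscape: Hl x Wl, portrait: Hp x Wp, with Wp <= Wl and Hl <= Hp.
    Returns an Hp x Wl canvas holding both arms of the cross."""
    hl, wl = landscape.shape
    hp, wp = portrait.shape
    canvas = np.full((hp, wl), fill, dtype=float)
    top = (hp - hl) // 2
    left = (wl - wp) // 2
    canvas[:, left:left + wp] = portrait              # vertical arm
    canvas[top:top + hl, :] = landscape               # horizontal arm (overwrites the overlap)
    return canvas


if __name__ == "__main__":
    cross = compose_cross(np.ones((1080, 1920)), 2 * np.ones((1920, 1080)))
    print(cross.shape, np.isnan(cross).sum() > 0)     # corners remain empty
```
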
  • Publication number: 20230072247
    Abstract: A method and system are provided for processing image content. In one embodiment, the method comprises receiving a plurality of captured contents showing the same scene, as captured by one or more cameras having different focal lengths, together with depth maps, and generating a consensus cube from depth map estimations obtained from said received contents. The visibility of different objects is then analysed to create a soft visibility cube that provides visibility information for each content. A color cube is then generated using information from the consensus and soft visibility cubes. The color cube is then used to combine the different received contents and generate a single image for the plurality of contents received.
    Type: Application
    Filed: February 19, 2021
    Publication date: March 9, 2023
    Inventors: Benoit Vandame, Didier Doyen, Guillaume Boisson
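
A heavily simplified sketch of the first two cubes described in publication 20230072247, assuming depth maps already aligned to a common grid (the real method works with reprojection between views): a consensus score per depth plane and a soft visibility obtained by attenuating planes hidden behind strong consensus. The colour cube and the final fusion step are not shown; thresholds and plane sampling are assumptions.

```python
# Simplified sketch: consensus cube (votes per depth plane) and soft
# visibility cube derived from it.
import numpy as np


def consensus_cube(depth_maps, depth_planes, tolerance=0.05):
    """depth_maps: list of aligned H x W maps. Returns a D x H x W cube in [0, 1]."""
    maps = np.stack(depth_maps)                               # V x H x W
    cube = np.zeros((len(depth_planes),) + maps.shape[1:])
    for i, z in enumerate(depth_planes):
        # Fraction of views whose depth estimate falls on this plane.
        cube[i] = np.mean(np.abs(maps - z) < tolerance, axis=0)
    return cube


def soft_visibility_cube(consensus):
    """Visibility of plane i = product over nearer planes j < i of (1 - consensus_j)."""
    occlusion = np.cumprod(1.0 - consensus, axis=0)
    visibility = np.ones_like(consensus)
    visibility[1:] = occlusion[:-1]
    return visibility


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    maps = [2.0 + 0.01 * rng.standard_normal((8, 8)) for _ in range(4)]
    planes = np.linspace(1.0, 3.0, 21)
    c = consensus_cube(maps, planes)
    v = soft_visibility_cube(c)
    print(c.shape, float(v[-1].mean()))
```
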
  • Patent number: 11570471
    Abstract: Predicting a component of a current pixel belonging to a current sub-aperture image in a matrix of sub-aperture images captured by a sensor of a type I plenoptic camera can involve, first, determining a location on the sensor based on: a distance from an exit pupil of a main lens of the camera to a micro-lens array of the camera; a focal length of the main lens; a focal length of the micro-lenses of the micro-lens array; and a set of parameters of a model of the camera allowing for a derivation of a two-plane parameterization describing the field of rays corresponding to the pixels of the sensor; and, second, predicting the component based on one reference pixel belonging to a reference sub-aperture image in the matrix and located on the sensor in a neighborhood of the location.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: January 31, 2023
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Didier Doyen, Olivier Bureller, Guillaume Boisson
  • Publication number: 20230019601
    Abstract: A method for decoding or encoding includes obtaining view parameters for a set of views comprising at least one reference view and a current view of multi-view video content, wherein each view comprises a texture layer and a depth layer. For at least one pair formed by a reference view and the current view of the set of views, an intermediate prediction image is generated by applying a forward projection method to pixels of the reference view, projecting these pixels from the camera coordinate system of the reference view to the camera coordinate system of the current view, the prediction image comprising information allowing image data to be reconstructed. At least one final prediction image obtained from at least one intermediate prediction image is stored in a buffer of reconstructed images of the current view. A current image of the current view is reconstructed from the images stored in said buffer, said buffer comprising said at least one final prediction image.
    Type: Application
    Filed: November 30, 2020
    Publication date: January 19, 2023
    Inventors: Didier Doyen, Franck Galpin, Guillaume Boisson
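
The sketch below shows a generic forward projection of reference-view pixels into the current view's camera coordinate system with z-buffered splatting, which is the core of the intermediate prediction image in publication 20230019601. Reference-buffer management and codec integration are omitted; the camera convention and single-channel colour are assumptions made for brevity.

```python
# Sketch: unproject reference-view pixels with depth, move them into the
# current camera frame, reproject and splat with a z-buffer.
import numpy as np


def forward_project(ref_color, ref_depth, K_ref, K_cur, R, t):
    """R, t map reference-camera coordinates into the current camera's frame.
    `ref_color` is a single channel here to keep the sketch short."""
    h, w = ref_depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    pts_ref = np.linalg.inv(K_ref) @ pix * ref_depth.ravel()   # unproject with depth
    pts_cur = R @ pts_ref + t.reshape(3, 1)                    # change of coordinate system
    proj = K_cur @ pts_cur
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    pred = np.zeros_like(ref_color)
    pred_z = np.full((h, w), np.inf)
    valid = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h) & (proj[2] > 0)
    flat_color = ref_color.ravel()
    for i in np.flatnonzero(valid):                            # z-buffered splatting
        y, x = vc[i], uc[i]
        if proj[2, i] < pred_z[y, x]:
            pred_z[y, x] = proj[2, i]
            pred[y, x] = flat_color[i]
    return pred


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    color = rng.random((64, 64))
    depth = np.full((64, 64), 2.0)
    K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
    pred = forward_project(color, depth, K, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
    print(pred.shape)
```
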
  • Publication number: 20220360771
    Abstract: Various embodiments relate to a video coding system in which some elements required for decoding are generated according to a process that is not specified within the video coding system. This process is hereafter referred to as the “external” process. This external process may generate “external” reference pictures to be used by a decoder that is adapted to use these external pictures. An encoding method, a decoding method, an encoding apparatus, and a decoding apparatus based on this external process are proposed.
    Type: Application
    Filed: September 18, 2020
    Publication date: November 10, 2022
    Inventors: Philippe Bordes, Didier Doyen, Franck Galpin, Michel Kerdranvat
  • Publication number: 20220311986
    Abstract: A method and system are provided for processing image content. The method comprises receiving information about content captured by at least one camera. The content includes a multi-view representation of an image containing both distorted and undistorted areas. The camera parameters and image parameters are then obtained and used to determine which areas of the image are distorted and which are undistorted. A depth map of the image is then calculated using the determined distorted and undistorted information. A final stereoscopic image is then rendered using the distorted and undistorted areas together with the calculated depth map.
    Type: Application
    Filed: September 29, 2020
    Publication date: September 29, 2022
    Inventors: Didier Doyen, Franck Galpin, Guillaume Boisson
  • Publication number: 20220156955
    Abstract: For multi-view video content represented in the MVD (Multi-view+Depth) format, the depth maps may be processed to improve the coherency therebetween. In one implementation, to process a target view based on an input view, pixels of the input view are first projected into the world coordinate system, then into the target view to form a projected view. The texture of the projected view and the texture of the target view are compared. If the difference at a pixel is small, then the depth of the target view at that pixel is adjusted, for example, replaced by the corresponding depth of the projected view. When the multi-view video content is encoded and decoded in a system, depth map processing may be applied in the pre-processing and post-processing modules to improve video compression efficiency and the rendering quality.
    Type: Application
    Filed: February 13, 2020
    Publication date: May 19, 2022
    Inventors: Didier Doyen, Benoit Vandame, Guillaume Boisson
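
Assuming the input view has already been forward projected into the target view (for instance with a routine like the forward projection sketched earlier), the sketch below implements the adjustment rule from the abstract of publication 20220156955: where the two textures differ little, the target depth is replaced by the projected depth. The texture threshold is illustrative.

```python
# Sketch of the depth-adjustment step only; the projection of the input view
# into the target view is assumed to have been done beforehand.
import numpy as np


def refine_target_depth(target_texture, target_depth, projected_texture, projected_depth,
                        texture_threshold=4.0 / 255.0):
    """Return a copy of `target_depth` where pixels whose texture difference with the
    projected view is small take the projected view's depth instead."""
    diff = np.abs(target_texture.astype(float) - projected_texture.astype(float))
    if diff.ndim == 3:                       # colour input: use the mean channel difference
        diff = diff.mean(axis=-1)
    agree = (diff < texture_threshold) & np.isfinite(projected_depth)
    refined = target_depth.copy()
    refined[agree] = projected_depth[agree]
    return refined


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    tex = rng.random((32, 32))
    refined = refine_target_depth(tex, np.full((32, 32), 3.0), tex, np.full((32, 32), 2.9))
    print(float(refined.mean()))             # close to 2.9: textures match everywhere
```
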
  • Patent number: 11257236
    Abstract: A method is proposed for estimating a depth for pixels in a matrix of M images. Such a method comprises, at least for one set of N images among the M images, with 2 < N ≤ M, a process comprising: determining depth maps for the images in the set of N images, delivering a set of N depth maps; and, for at least one current pixel for which a depth has not yet been estimated, deciding whether a candidate depth corresponding to a depth value in the set of N depth maps is consistent with the other depth map(s) of the set, and selecting the candidate depth as the estimated depth for the current pixel if it is decided to be consistent. The process is applied iteratively with a new N value that is lower than the N value used in the previous iteration.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: February 22, 2022
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Frederic Babon, Neus Sabater, Matthieu Hog, Didier Doyen, Guillaume Boisson
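
The sketch below is a much-simplified version of the iterative consistency test of patent 11257236, assuming depth maps already expressed in a common reference grid (the patent reasons across views via reprojection). The tolerance and the order in which maps are dropped between iterations are invented for the example.

```python
# Simplified sketch: accept a candidate depth only if the other depth maps in
# the current set agree with it; retry undecided pixels with a smaller set.
import numpy as np


def estimate_depth(depth_maps, tolerance=0.05):
    """depth_maps: list of M aligned H x W depth maps. Returns an H x W estimate
    (NaN where no consistent candidate was found)."""
    maps = np.stack(depth_maps)                        # N x H x W
    estimate = np.full(maps.shape[1:], np.nan)
    n = maps.shape[0]
    while n >= 2:
        subset = maps[:n]
        candidate = subset[0]                          # candidate depth from the first map
        # Consistent if every other map in the subset agrees within the tolerance.
        consistent = np.all(np.abs(subset[1:] - candidate) < tolerance, axis=0)
        undecided = np.isnan(estimate)
        estimate[consistent & undecided] = candidate[consistent & undecided]
        n -= 1                                         # retry remaining pixels with fewer maps
    return estimate


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    base = 2.0 + rng.random((16, 16))
    noisy = [base + rng.normal(0, 0.01, base.shape) for _ in range(3)]
    noisy[2][0, 0] = 10.0                              # an outlier in one map
    print(np.isnan(estimate_depth(noisy)).sum())
```
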
  • Publication number: 20220005216
    Abstract: The invention relates to layered depth data. In multi-view images, there is a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known solution for formatting multi-view images that reduces the amount of redundant information between images. In LDV, a reference central image is selected, and the information brought by the other images of the multi-view set, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer, and thus fails to render viewpoints that uncover multiple layers of dis-occlusions. The invention uses light-field content, which offers disparities in every direction and enables a change in viewpoint in a plurality of directions distinct from the viewing direction of the considered image, making it possible to render viewpoints that uncover multiple layers of dis-occlusions, as may occur with complex scenes viewed with a wide inter-axial distance.
    Type: Application
    Filed: September 17, 2021
    Publication date: January 6, 2022
    Inventors: Didier Doyen, Guillaume Boisson, Sylvain Thiebaud
  • Patent number: 11127146
    Abstract: The invention relates to layered depth data. In multi-view images, there is a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known solution for formatting multi-view images that reduces the amount of redundant information between images. In LDV, a reference central image is selected, and the information brought by the other images of the multi-view set, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer, and thus fails to render viewpoints that uncover multiple layers of dis-occlusions. The invention uses light-field content, which offers disparities in every direction and enables a change in viewpoint in a plurality of directions distinct from the viewing direction of the considered image, making it possible to render viewpoints that uncover multiple layers of dis-occlusions, as may occur with complex scenes viewed with a wide inter-axial distance.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: September 21, 2021
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Didier Doyen, Guillaume Boisson, Sylvain Thiebaud
  • Publication number: 20210279902
    Abstract: A method is proposed for estimating a depth for pixels in a matrix of M images. Such a method comprises, at least for one set of N images among the M images, with 2 ≤ N ≤ M, a process comprising: determining depth maps for the images in the set of N images, delivering a set of N depth maps; and, for at least one current pixel for which a depth has not yet been estimated, deciding whether a candidate depth corresponding to a depth value in the set of N depth maps is consistent with the other depth map(s) of the set, and selecting the candidate depth as the estimated depth for the current pixel if it is decided to be consistent. The process is applied iteratively with a new N value that is lower than the N value used in the previous iteration.
    Type: Application
    Filed: July 17, 2019
    Publication date: September 9, 2021
    Inventors: Frederic Babon, Neus Sabater, Matthieu Hog, Didier Doyen, Guillaume Boisson