Patents by Inventor Philippe Guillotel

Philippe Guillotel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250252830
    Abstract: A data structure for an immersive scene description comprises information representative of a haptic effect based on a haptic texture, together with an additional information field that determines how haptic textures are to be interpreted, differentiating between the case where a pixel directly represents the value of the haptic effect and the case where a pixel references a haptic signal representing the effect. The additional information may also carry information to select a bit depth and a range for a haptic property from a set of different settings.
    Type: Application
    Filed: April 6, 2023
    Publication date: August 7, 2025
    Inventors: Quentin Galvane, Philippe Guillotel, Franck Galpin
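The interpretation field described above can be sketched as a small data structure. This is a minimal illustration under assumed names (`TextureInterpretation`, `HapticTextureInfo`); the actual scene-description syntax in the application will differ.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical field and type names; the real syntax differs.
class TextureInterpretation(Enum):
    DIRECT_VALUE = 0      # the pixel encodes the haptic value itself
    SIGNAL_REFERENCE = 1  # the pixel indexes a table of haptic signals

@dataclass
class HapticTextureInfo:
    interpretation: TextureInterpretation
    bit_depth: int        # selectable bit depth, e.g. 8 or 16
    value_range: tuple    # selectable physical range (lo, hi)

    def decode_pixel(self, pixel, signals=None):
        if self.interpretation is TextureInterpretation.DIRECT_VALUE:
            lo, hi = self.value_range
            return lo + (hi - lo) * pixel / (2 ** self.bit_depth - 1)
        return signals[pixel]  # pixel is an index into the signal table

direct = HapticTextureInfo(TextureInterpretation.DIRECT_VALUE, 8, (0.0, 1.0))
referenced = HapticTextureInfo(TextureInterpretation.SIGNAL_REFERENCE, 8, (0.0, 1.0))
value = direct.decode_pixel(255)                        # full-scale pixel
signal = referenced.decode_pixel(1, signals=["sigA", "sigB"])
```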
  • Publication number: 20250138639
    Abstract: A coding method compresses a haptic signal of a haptic effect. Compression parameters are determined at least in part from the location where the haptic effect is to be applied. Information representative of the haptic effect comprises the compressed signal and the location, which may be based on body segmentation, or be vertex-based or texture-based. A corresponding decoding method, coding device, decoding device, computer program, non-transitory computer-readable medium, and system are described.
    Type: Application
    Filed: September 23, 2022
    Publication date: May 1, 2025
    Inventors: Quentin Galvane, Fabien Danieau, Philippe Guillotel
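The location-dependent compression idea can be sketched as a parameter table keyed by body location. The table values and uniform quantization are assumptions for illustration; the patent leaves the exact parameter choices to the codec.

```python
# Hypothetical location-dependent parameter table (values are assumptions).
LOCATION_PARAMS = {
    "hand":  {"sample_rate_hz": 8000, "bits": 16},  # touch-sensitive region
    "torso": {"sample_rate_hz": 1000, "bits": 8},   # coarser coding suffices
}

def encode_haptic(signal, location):
    """Pick compression parameters from the target body location, then
    apply a simple uniform quantization as a stand-in for the codec."""
    params = LOCATION_PARAMS.get(location, {"sample_rate_hz": 2000, "bits": 8})
    step = 1.0 / (2 ** params["bits"] - 1)
    compressed = [round(s / step) for s in signal]
    # The coded representation carries both the signal and the location.
    return {"location": location, "params": params, "data": compressed}

packet = encode_haptic([0.0, 0.5, 1.0], "hand")
```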
  • Publication number: 20250044875
    Abstract: A haptic rendering device and corresponding method render a haptic effect described by metadata comprising, for at least one haptic channel, information representative of a geometric model and information representative of the element of the geometric model where the haptic feedback is to be applied; an associated haptic file comprises the haptic signal itself. A file format for carrying the required information is provided.
    Type: Application
    Filed: October 24, 2024
    Publication date: February 6, 2025
    Inventors: Philippe Guillotel, Fabien Danieau, Quentin Galvane
  • Publication number: 20250044871
    Abstract: A data structure storing information representative of haptic effects comprises a set of haptic effects and a timeline. Haptic effects may be defined flexibly, either directly within the timeline or referenced by an identifier within the timeline, with a library storing the effect definitions and their associated identifiers. Data restructuring processes are proposed to convert from a streaming-friendly format, where the library is not used, to an editing-friendly format that uses a library, and vice versa.
    Type: Application
    Filed: November 8, 2022
    Publication date: February 6, 2025
    Inventors: Quentin Galvane, Fabien Danieau, Philippe Guillotel
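The two layouts and the restructuring between them can be sketched as follows. Field names (`timeline`, `effect`, `ref`, `library`) are assumptions, not the actual syntax.

```python
# Sketch of the restructuring between the two layouts described above.
def to_edition_format(stream):
    """Move inline effect definitions into a library; leave references."""
    library, timeline = {}, []
    for entry in stream["timeline"]:
        effect = entry["effect"]
        eid = effect.setdefault("id", f"fx{len(library)}")
        library[eid] = effect
        timeline.append({"time": entry["time"], "ref": eid})
    return {"library": library, "timeline": timeline}

def to_streaming_format(doc):
    """Inline the referenced definitions so no library lookup is needed."""
    return {"timeline": [{"time": e["time"], "effect": doc["library"][e["ref"]]}
                         for e in doc["timeline"]]}

stream = {"timeline": [{"time": 0.0, "effect": {"type": "vibration", "freq": 250}}]}
doc = to_edition_format(stream)          # editing-friendly form
round_trip = to_streaming_format(doc)    # back to streaming-friendly form
```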
  • Publication number: 20250036203
    Abstract: A data structure storing information representative of an immersive experience comprises information representative of the haptic capabilities of a reference haptic rendering device, so that haptic signals may be adapted to a rendering device with different haptic capabilities. Such adaptation may be performed directly at the rendering stage by the rendering device itself. A transcoding process allows the generation of a new data structure for a rendering device with different haptic capabilities.
    Type: Application
    Filed: November 17, 2022
    Publication date: January 30, 2025
    Inventors: Quentin Galvane, Fabien Danieau, Philippe Guillotel
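The adaptation step can be sketched as clamping an effect, authored against the reference device, to a target device's capabilities. The capability fields and the clamping rule are illustrative assumptions, not the data structure's actual syntax.

```python
def transcode(effect, device_caps):
    """Adapt a vibration effect authored for a reference device to a
    target device with different capabilities (fields are assumptions)."""
    out = dict(effect)
    out["freq"] = min(max(effect["freq"], device_caps["min_freq"]),
                      device_caps["max_freq"])
    out["amplitude"] = min(effect["amplitude"], device_caps["max_amplitude"])
    return out

reference_effect = {"freq": 250, "amplitude": 1.0}
phone_caps = {"min_freq": 100, "max_freq": 200, "max_amplitude": 0.5}
adapted = transcode(reference_effect, phone_caps)
```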
  • Patent number: 12158989
    Abstract: A haptic rendering device and corresponding method render a haptic effect described by metadata comprising, for at least one haptic channel, information representative of a geometric model and information representative of the element of the geometric model where the haptic feedback is to be applied; an associated haptic file comprises the haptic signal itself. A file format for carrying the required information is provided.
    Type: Grant
    Filed: September 6, 2021
    Date of Patent: December 3, 2024
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Philippe Guillotel, Fabien Danieau, Quentin Galvane
  • Patent number: 12149726
    Abstract: For a bi-prediction block, the initial motion field can be refined using a DNN. In one implementation, the initial motion field is integer-rounded to obtain initial prediction blocks. Based on the initial prediction, the DNN generates motion refinement information, which is scaled and added to the sub-pel residual motion from the initial motion field to produce a refined motion field. The scaling factor can take a default value or be based on the motion asymmetry. While the initial motion field is usually block-based or sub-block-based, the refined motion field is pixel-based or sub-block-based and can be at arbitrary accuracy. The same refinement process is performed at both the encoder and the decoder, so the motion refinement information need not be signaled. Whether refinement is enabled can be determined from the initial motion, the block activity, and the block size.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: November 19, 2024
    Assignee: InterDigital Madison Patent Holdings, SAS
    Inventors: Franck Galpin, Philippe Bordes, Philippe Guillotel, Xuan Hien Pham
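The refinement arithmetic can be sketched per motion component, with the DNN stubbed out as an externally supplied correction. The default scale of 1.0 is an assumption; the abstract only says the factor may be a default or asymmetry-based.

```python
def refine_motion(initial_mv, dnn_refinement, scale=None):
    """Sketch of the decoder-side refinement: integer-round the initial
    motion (as used for the initial prediction), keep the sub-pel residual,
    and add back the scaled DNN correction. The DNN itself is stubbed:
    `dnn_refinement` stands in for its output."""
    if scale is None:
        scale = 1.0  # assumed default; could instead reflect motion asymmetry
    int_mv = round(initial_mv)            # rounded motion for prediction
    subpel_residual = initial_mv - int_mv # kept and refined below
    return int_mv + subpel_residual + scale * dnn_refinement

# Encoder and decoder run the same function, so nothing is signaled.
refined = refine_motion(3.25, dnn_refinement=-0.1)
```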
  • Patent number: 12142013
    Abstract: Methods and devices for encoding and decoding a data stream representative of a 3D volumetric scene comprising haptic features associated with objects of the 3D scene are disclosed. At the encoding side, haptic features are associated with objects of the scene, for instance as haptic maps. Haptic components are stored in points of the 3D scene in the same way color components are. These components are projected onto patch pictures, which are packed in atlas images. At the decoding side, haptic components are un-projected onto reconstructed points, like color components, according to the depth component of the pixels of the decoded atlases.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: November 12, 2024
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Fabien Danieau, Julien Fleureau, Gaetan Moisson-Franckhauser, Philippe Guillotel
  • Publication number: 20240249489
    Abstract: A method and an apparatus for generating a 3D face comprising at least one region deformed according to a deformation style are provided. 3D data representative of at least one region of a first 3D face is provided as input to a neural network-based geometry deformation generator. Data representative of the deformation style is provided as input to the neural network-based geometry deformation generator. The 3D face comprising the at least one deformed region is obtained from the neural network-based geometry deformation generator, wherein the at least one deformed region includes geometry deformations representative of the deformation style.
    Type: Application
    Filed: May 16, 2022
    Publication date: July 25, 2024
    Inventors: Nicolas Olivier, Fabien Danieau, Quentin Avril, Philippe Guillotel, Ferran Argelaguet Sanz, Anatole Lecuyer, Franck Multon, Ludovic Hoyet
  • Patent number: 11964200
    Abstract: In a particular implementation, a user environment space for haptic feedback and interactivity (HapSpace) is proposed. In one embodiment, the HapSpace is a virtual space attached to the user, defined by the maximum distance that the user's body can reach; it may move as the user moves. Haptic objects and haptic devices, and their associated haptic properties, may also be defined within the HapSpace. New descriptors, such as those enabling precise locations of, and links between, the user and haptic objects/devices, are defined for describing the HapSpace.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: April 23, 2024
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Philippe Guillotel, Fabien Danieau, Julien Fleureau, Didier Doyen
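The HapSpace geometry above can be sketched as a sphere of the user's maximum reach that follows the user. The spherical model and class shape are assumptions for illustration; the patent defines the space via descriptors, not this API.

```python
import math

class HapSpace:
    """Sketch: a user-attached space whose radius is the user's maximum
    reach, centered on (and moving with) the user."""
    def __init__(self, user_pos, reach):
        self.user_pos, self.reach = list(user_pos), reach

    def move_user(self, new_pos):
        self.user_pos = list(new_pos)  # the HapSpace follows the user

    def contains(self, obj_pos):
        # A haptic object is in the HapSpace if it is within reach.
        return math.dist(self.user_pos, obj_pos) <= self.reach

space = HapSpace((0.0, 0.0, 0.0), reach=0.9)
near = space.contains((0.5, 0.5, 0.0))  # within arm's reach
space.move_user((5.0, 0.0, 0.0))
far = space.contains((0.5, 0.5, 0.0))   # the space moved with the user
```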
  • Publication number: 20230418381
    Abstract: A haptic rendering device and corresponding rendering method render a haptic effect defined in a haptic signal comprising information representative of an immersive scene description. The immersive scene comprises information representative of at least one element of the scene and information representative of a haptic object, comprising a type of haptic effect, at least one parameter of the haptic effect, and a haptic volume or surface where the haptic effect is active. The parameter of the haptic effect may be a haptic texture map. A corresponding syntax is proposed.
    Type: Application
    Filed: October 22, 2021
    Publication date: December 28, 2023
    Inventors: Fabien Danieau, Quentin Galvane, Philippe Guillotel
  • Publication number: 20230367395
    Abstract: A haptic rendering device and corresponding method render a haptic effect described by metadata comprising, for at least one haptic channel, information representative of a geometric model and information representative of the element of the geometric model where the haptic feedback is to be applied; an associated haptic file comprises the haptic signal itself. A file format for carrying the required information is provided.
    Type: Application
    Filed: September 6, 2021
    Publication date: November 16, 2023
    Inventors: Philippe Guillotel, Fabien Danieau, Quentin Galvane
  • Publication number: 20230171421
    Abstract: For a bi-prediction block, the initial motion field can be refined using a DNN. In one implementation, the initial motion field is integer-rounded to obtain initial prediction blocks. Based on the initial prediction, the DNN generates motion refinement information, which is scaled and added to the sub-pel residual motion from the initial motion field to produce a refined motion field. The scaling factor can take a default value or be based on the motion asymmetry. While the initial motion field is usually block-based or sub-block-based, the refined motion field is pixel-based or sub-block-based and can be at arbitrary accuracy. The same refinement process is performed at both the encoder and the decoder, so the motion refinement information need not be signaled. Whether refinement is enabled can be determined from the initial motion, the block activity, and the block size.
    Type: Application
    Filed: May 18, 2021
    Publication date: June 1, 2023
    Inventors: Franck Galpin, Philippe Bordes, Philippe Guillotel, Xuan Hien Pham
  • Patent number: 11184581
    Abstract: A content stream comprising video and synchronized illumination data is based on a reference lighting setup from, for example, the site of the content creation. The content stream is received at a user location, where the illumination data controls user lighting synchronized with the video data, so that the user's lighting matches the video as it is displayed. In one embodiment, the illumination data is also synchronized with events of a game, so that a user playing in a gaming environment has their lighting synchronized with the video and events of the game. In another embodiment, the content stream is embedded on a disk.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: November 23, 2021
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Philippe Guillotel, Martin Alain, Erik Reinhard, Jean Begaint, Dominique Thoreau, Joaquin Zepeda Salvatierra
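The synchronization can be sketched as interleaving per-frame illumination records with video frames, so the receiver drives the lamps in lockstep with display. The record layout and callback interface are assumptions, not the stream format claimed by the patent.

```python
# Sketch: pair each video frame with its illumination value (assumed layout).
def mux(video_frames, light_values):
    assert len(video_frames) == len(light_values)
    return [{"frame": f, "light": l} for f, l in zip(video_frames, light_values)]

def render(stream, set_lamp, show):
    """Replay the stream: update the user's lighting, then present the
    synchronized frame, one record per display interval."""
    for record in stream:
        set_lamp(record["light"])
        show(record["frame"])

events = []
stream = mux(["f0", "f1"], [0.2, 0.8])
render(stream,
       set_lamp=lambda v: events.append(("lamp", v)),
       show=lambda f: events.append(("show", f)))
```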
  • Patent number: 10877561
    Abstract: An apparatus and method are provided in which pressure sensors are disposed in a configuration, such as a matrix, on the surface of a user input device. The processor generates a proxy on the image having a plurality of points disposed in a corresponding configuration, each proxy point being associated with a pressure sensor location. The processor then generates an output effect responsive to input signals received from one or more pressure sensors of the input device.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: December 29, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Fabien Danieau, Antoine Costes, Edouard Callens, Philippe Guillotel
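The sensor-to-proxy mapping can be sketched as a one-to-one assignment from an m x n sensor matrix to points on the image. The coordinate scheme and pressure threshold are illustrative assumptions.

```python
# Sketch: map an m x n sensor matrix onto proxy points of the image.
def make_proxy(rows, cols, width, height):
    """One proxy point per sensor, at the center of its image cell
    (coordinate scheme is an assumption for illustration)."""
    return {(r, c): ((c + 0.5) * width / cols, (r + 0.5) * height / rows)
            for r in range(rows) for c in range(cols)}

def output_effect(proxy, readings, threshold=0.1):
    """Return the proxy points whose sensors exceed the pressure
    threshold; a renderer would apply the effect at these points."""
    return [proxy[idx] for idx, p in readings.items() if p > threshold]

proxy = make_proxy(2, 2, width=100, height=100)
touched = output_effect(proxy, {(0, 0): 0.6, (1, 1): 0.05})
```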
  • Publication number: 20200382742
    Abstract: A content stream comprising video and synchronized illumination data is based on a reference lighting setup from, for example, the site of the content creation. The content stream is received at a user location, where the illumination data controls user lighting synchronized with the video data, so that the user's lighting matches the video as it is displayed. In one embodiment, the illumination data is also synchronized with events of a game, so that a user playing in a gaming environment has their lighting synchronized with the video and events of the game. In another embodiment, the content stream is embedded on a disk.
    Type: Application
    Filed: November 28, 2017
    Publication date: December 3, 2020
    Inventors: Philippe Guillotel, Martin Alain, Erik Reinhard, Jean Begaint, Dominique Thoreau, Joaquin Zepeda Salvatierra
  • Patent number: 10785502
    Abstract: The present disclosure generally relates to a method for predicting at least one block of pixels of a view (170) belonging to a matrix of views (17) obtained from light-field data associated with a scene. According to the present disclosure, the method is implemented by a processor and comprises, for at least one pixel to predict of said block of pixels: from said matrix of views (17), obtaining (51) at least one epipolar plane image (EPI) associated with said pixel to predict; among a set of unidirectional prediction modes, determining (52) at least one optimal unidirectional prediction mode from a set of previously reconstructed pixels neighboring said pixel to predict in said at least one epipolar plane image; and extrapolating (53) a prediction value of said pixel to predict using said at least one optimal unidirectional prediction mode.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: September 22, 2020
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Dominique Thoreau, Martin Alain, Mehmet Turkan, Philippe Guillotel
  • Patent number: 10672104
    Abstract: A method and apparatus are provided for generating an extrapolated image from existing film or video content, which can be displayed beyond the borders of the existing film or video content to increase viewer immersiveness. The present principles provide for generating the extrapolated image without salient objects, that is, objects that may distract the viewer from the main image. Such an extrapolated image is generated by determining salient areas and generating the extrapolated image with less salient content in their place. Alternatively, salient objects can be detected in the extrapolated image and removed. Additionally, selected salient objects may be added to the extrapolated image.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: June 2, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Fabrice Urban, Philippe Guillotel, Laura Turban
  • Publication number: 20200099955
    Abstract: Encoding or decoding a stack of images of a same scene focused at different focalization distances from one image to another can involve encoding or decoding information representing an image of the stack of images, where the image meets an image sharpness criterion, reconstructing the image into a reconstructed image, and encoding or decoding at least one other image of the stack of images by prediction from at least the reconstructed image.
    Type: Application
    Filed: November 26, 2019
    Publication date: March 26, 2020
    Inventors: Philippe Guillotel, Dominique Thoreau, Benoit Vandame, Patrick Lopez, Guillaume Boisson
  • Patent number: 10593027
    Abstract: A method for processing at least one peripheral image that, when displayed, extends beyond the borders of a displayed central image is disclosed. The method adapts the luminance of the peripheral image to human vision characteristics: the luminance is processed so that the light rendered from the peripheral image in the viewer's field of view remains low and close to the light rendered by the central view alone. In a first embodiment, the method adapts the luminance of the peripheral image to a reference reflectance level by applying a light correction function to the input luminance, the light correction function being obtained by measuring the rendered luminance level of the displayed peripheral image adapted to the reference reflectance level of the surface on which the peripheral image is displayed. In a second embodiment, the luminance is further adapted to the real reflectance with respect to the reference reflectance.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: March 17, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Philippe Guillotel, Laura Turban, Fabrice Urban
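The two embodiments can be sketched as a correction applied to the input luminance. The linear model below is an assumption for illustration; in the patent the correction function is obtained by measuring the rendered luminance, not assumed.

```python
def correct_luminance(input_luma, reference_reflectance, real_reflectance=None):
    """Sketch: scale input luminance by a correction tied to the reference
    reflectance (first embodiment), then optionally compensate for the
    real surface's reflectance relative to the reference (second
    embodiment). The linear model is an assumption."""
    corrected = input_luma * reference_reflectance
    if real_reflectance is not None:
        corrected *= reference_reflectance / real_reflectance
    return corrected

peripheral = correct_luminance(0.8, reference_reflectance=0.5)
# A darker real surface (lower reflectance) gets a brighter drive signal.
compensated = correct_luminance(0.8, 0.5, real_reflectance=0.25)
```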