Patents by Inventor Philippe Guillotel

Philippe Guillotel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11964200
    Abstract: In a particular implementation, a user environment space for haptic feedback and interactivity (HapSpace) is proposed. In one embodiment, the HapSpace is a virtual space attached to the user and defined by the maximum distance that the user's body can reach. The HapSpace may move as the user moves. Haptic objects and haptic devices, and their associated haptic properties, may also be defined within the HapSpace. New descriptors, such as those enabling precise locations of, and links between, the user and haptic objects/devices, are defined for describing the HapSpace.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: April 23, 2024
    Assignee: InterDigital CE Patent Holdings, SAS
    Inventors: Philippe Guillotel, Fabien Danieau, Julien Fleureau, Didier Doyen
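The HapSpace idea above (a user-attached volume bounded by the body's maximum reach, which follows the user and holds haptic objects) can be illustrated with a minimal sketch. All names here (`HapSpace`, `HapticObject`, `contains`, `move_user`) are illustrative assumptions, not from the patent text.

```python
from dataclasses import dataclass, field

@dataclass
class HapticObject:
    name: str
    position: tuple  # position relative to the HapSpace origin (i.e. the user)

@dataclass
class HapSpace:
    user_position: tuple  # user's position in world coordinates
    max_reach: float      # maximum distance the user's body can reach
    objects: list = field(default_factory=list)

    def contains(self, world_point):
        """A world point lies in the HapSpace if it is within max_reach of the user."""
        dist = sum((p - u) ** 2 for p, u in zip(world_point, self.user_position)) ** 0.5
        return dist <= self.max_reach

    def move_user(self, new_position):
        """The HapSpace follows the user; attached objects keep their
        user-relative positions, so only the origin changes."""
        self.user_position = new_position

space = HapSpace(user_position=(0.0, 0.0, 0.0), max_reach=0.9)
space.objects.append(HapticObject("vibrating_ball", position=(0.3, 0.2, 0.1)))
print(space.contains((0.5, 0.5, 0.5)))   # within reach of the user at the origin
space.move_user((10.0, 0.0, 0.0))
print(space.contains((0.5, 0.5, 0.5)))   # the space moved with the user
```

The key design point is that object positions are stored relative to the user, so moving the user implicitly carries the whole haptic scene along.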
  • Publication number: 20230418381
    Abstract: A haptic rendering device and a corresponding rendering method allow rendering of a haptic effect defined in a haptic signal comprising information representative of an immersive scene description. The immersive scene comprises information representative of at least one element of the scene and information representative of a haptic object, comprising a type of haptic effect, at least one parameter of the haptic effect, and a haptic volume or surface where the haptic effect is active. The parameter of the haptic effect may be a haptic texture map. A corresponding syntax is proposed.
    Type: Application
    Filed: October 22, 2021
    Publication date: December 28, 2023
    Inventors: Fabien Danieau, Quentin Galvane, Philippe Guillotel
  • Publication number: 20230367395
    Abstract: A haptic rendering device and a corresponding method allow rendering of a haptic effect described by metadata comprising, for at least one haptic channel, information representative of a geometric model and of the element of the geometric model where the haptic feedback is to be applied; an associated haptic file comprises the haptic signal to be applied. A file format for carrying the required information is provided.
    Type: Application
    Filed: September 6, 2021
    Publication date: November 16, 2023
    Inventors: Philippe Guillotel, Fabien Danieau, Quentin Galvane
  • Publication number: 20230171421
    Abstract: For a bi-prediction block, the initial motion field can be refined using a DNN. In one implementation, the initial motion field is integer-rounded to obtain initial prediction blocks. Based on the initial prediction, the DNN can generate motion refinement information, which is scaled and added to the sub-pel residual motion from the initial motion field to generate a refined motion field. The scaling factor can take a default value or be based on the motion asymmetry. While the initial motion field is usually block-based or sub-block-based, the refined motion field is pixel-based or sub-block-based and can be at an arbitrary accuracy. The same refinement process is performed at both the encoder and decoder, so the motion refinement information need not be signaled. Whether the refinement is enabled can be determined based on the initial motion, the block activity, and the block size.
    Type: Application
    Filed: May 18, 2021
    Publication date: June 1, 2023
    Inventors: Franck GALPIN, Philippe BORDES, Philippe GUILLOTEL, Xuan Hien PHAM
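The arithmetic of the combination step above (integer part of the initial motion, plus the scaled DNN refinement, plus the sub-pel residual) can be sketched as follows. The DNN output is replaced by a fixed example vector, and all names are assumptions for illustration.

```python
def round_mv(mv):
    """Integer-round a motion vector (x, y) to get the initial prediction position."""
    return (round(mv[0]), round(mv[1]))

def refine_motion(initial_mv, dnn_refinement, scale=1.0):
    """refined = integer part + scaled DNN refinement + sub-pel residual."""
    int_mv = round_mv(initial_mv)
    subpel_residual = (initial_mv[0] - int_mv[0], initial_mv[1] - int_mv[1])
    return (int_mv[0] + scale * dnn_refinement[0] + subpel_residual[0],
            int_mv[1] + scale * dnn_refinement[1] + subpel_residual[1])

# A block-level motion vector of (2.3, -1.6) with a small refinement that,
# in the scheme above, a DNN would predict from the initial prediction blocks:
refined = refine_motion((2.3, -1.6), dnn_refinement=(0.1, -0.05))
print(refined)
```

Because the decoder runs the same stubbed-out refinement on the same inputs, the refinement itself never needs to be transmitted, which is the signaling saving the abstract describes.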
  • Patent number: 11184581
    Abstract: A content stream comprising video and synchronized illumination data is based on a reference lighting setup from, for example, the site of the content creation. The content stream is received at a user location where the illumination data controls user lighting that is synchronized with the video data, so that when the video data is displayed the user's lighting is in synchronization with the video. In one embodiment, the illumination data is also synchronized with events of a game, so that a user playing games in a gaming environment will have his lighting synchronized with video and events of the game. In another embodiment, the content stream is embedded on a disk.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: November 23, 2021
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Philippe Guillotel, Martin Alain, Erik Reinhard, Jean Begaint, Dominique Thoreau, Joaquin Zepeda Salvatierra
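A minimal sketch of applying synchronized illumination data, under assumptions: the illumination track is a time-sorted list of lighting commands, and each video frame timestamp looks up the most recent command at or before it. The track layout and names are illustrative, not from the patent.

```python
import bisect

def light_at(illumination_track, t):
    """illumination_track: sorted list of (timestamp, rgb) lighting commands.
    Return the command active at time t (the last one at or before t)."""
    times = [ts for ts, _ in illumination_track]
    i = bisect.bisect_right(times, t) - 1
    return illumination_track[max(i, 0)][1]

# Hypothetical track: white at start, warm scene light at 1.5 s, dim blue at 3 s.
track = [(0.0, (255, 255, 255)), (1.5, (255, 120, 0)), (3.0, (0, 0, 80))]
print(light_at(track, 2.0))  # the warm lighting active from t=1.5
```

The same lookup could be driven by game-event timestamps instead of video frame times, matching the gaming embodiment in the abstract.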
  • Patent number: 10877561
    Abstract: An apparatus and method are provided in which pressure sensors are disposed in a configuration, such as a matrix format, on the surface of a user input device. The processor generates a proxy on the image having a plurality of points disposed in a corresponding configuration. Each proxy point is associated with a pressure sensor location. The processor then generates an output effect responsive to input signals received from one or more pressure sensors of the input device.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: December 29, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Fabien Danieau, Antoine Costes, Edouard Callens, Philippe Guillotel
  • Publication number: 20200382742
    Abstract: A content stream comprising video and synchronized illumination data is based on a reference lighting setup from, for example, the site of the content creation. The content stream is received at a user location where the illumination data controls user lighting that is synchronized with the video data, so that when the video data is displayed the user's lighting is in synchronization with the video. In one embodiment, the illumination data is also synchronized with events of a game, so that a user playing games in a gaming environment will have his lighting synchronized with video and events of the game. In another embodiment, the content stream is embedded on a disk.
    Type: Application
    Filed: November 28, 2017
    Publication date: December 3, 2020
    Inventors: Philippe GUILLOTEL, Martin ALAIN, Erik REINHARD, Jean BEGAINT, Dominique THOREAU, Joaquin ZEPEDA SALVATIERRA
  • Patent number: 10785502
    Abstract: The present disclosure generally relates to a method for predicting at least one block of pixels of a view (170) belonging to a matrix of views (17) obtained from light-field data belonging to a scene. According to the present disclosure, the method is implemented by a processor and comprises, for at least one pixel to predict of said block of pixels: obtaining (51), from said matrix of views (17), at least one epipolar plane image (EPI) belonging to said pixel to predict; determining (52), among a set of unidirectional prediction modes, at least one optimal unidirectional prediction mode from a set of previously reconstructed pixels neighboring said pixel to predict in said at least one epipolar plane image; and extrapolating (53) a prediction value of said pixel to predict using said at least one optimal unidirectional prediction mode.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: September 22, 2020
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Dominique Thoreau, Martin Alain, Mehmet Turkan, Philippe Guillotel
  • Patent number: 10672104
    Abstract: A method and apparatus for generating an extrapolated image from existing film or video content, which can be displayed beyond the borders of the existing film or video content to increase viewer immersiveness, are provided. The present principles provide for generating the extrapolated image without salient objects included therein, that is, objects that may distract the viewer from the main image. Such an extrapolated image is generated by determining salient areas and generating the extrapolated image with less salient content in their place. Alternatively, salient objects can be detected in the extrapolated image and removed. Additionally, selected salient objects may be added to the extrapolated image.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: June 2, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Fabrice Urban, Philippe Guillotel, Laura Turban
  • Publication number: 20200099955
    Abstract: Encoding or decoding a stack of images of the same scene, focused at different focalization distances from one image to another, can involve encoding or decoding information representing an image of the stack that meets an image sharpness criterion, reconstructing that image into a reconstructed image, and encoding or decoding at least one other image of the stack by prediction from at least the reconstructed image.
    Type: Application
    Filed: November 26, 2019
    Publication date: March 26, 2020
    Inventors: Philippe Guillotel, Dominique Thoreau, Benoit Vandame, Patrick Lopez, Guillaume Boisson
  • Patent number: 10593027
    Abstract: A method for processing at least one peripheral image that, when displayed, extends beyond the borders of a displayed central image is disclosed. The method adapts the luminance of the peripheral image to human vision characteristics, so that the light rendered from the peripheral image in the viewer's field of view remains low and close to the light rendered by the central view alone. According to a first embodiment, the method adapts the luminance of the peripheral image to a reference reflectance level by applying a light correction function to the input luminance, where the light correction function is obtained by measuring the rendered luminance level of the displayed peripheral image adapted to the reference reflectance level of the surface on which the peripheral image is displayed. According to a second embodiment, the luminance is further adapted to the real reflectance with respect to the reference reflectance.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: March 17, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Philippe Guillotel, Laura Turban, Fabrice Urban
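The second embodiment above compensates for the real reflectance of the projection surface relative to the reference reflectance. A minimal sketch, assuming a simple linear reflectance model (the function name and the linearity are assumptions, not the patented correction):

```python
def correct_luminance(input_luminance, reference_reflectance, real_reflectance):
    """Scale the input luminance so the level rendered on the real surface
    matches what the reference surface would have rendered."""
    return input_luminance * (reference_reflectance / real_reflectance)

# A darker wall (reflectance 0.4 vs. reference 0.8) needs twice the drive level
# to render the same peripheral luminance.
print(correct_luminance(50.0, reference_reflectance=0.8, real_reflectance=0.4))
```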
  • Patent number: 10536718
    Abstract: A method for encoding a current focal stack, comprising a set of images focused at different focalization distances from one image to another, is disclosed. According to the present disclosure, the method comprises: encoding (31) information representing an image of the current focal stack, the image being selected in said current focal stack according to an image sharpness criterion, and reconstructing the image into a reconstructed image; and encoding (32) at least one other image of the current focal stack by prediction from at least the reconstructed image.
    Type: Grant
    Filed: September 5, 2016
    Date of Patent: January 14, 2020
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Philippe Guillotel, Dominique Thoreau, Benoit Vandame, Patrick Lopez, Guillaume Boisson
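The reference-selection step above picks one image of the focal stack by a sharpness criterion. A hedged sketch of one plausible criterion (the sum of squared discrete-Laplacian responses, a common focus measure); the criterion and all names are assumptions, not the patent's specific choice:

```python
def sharpness(image):
    """Sum of squared discrete-Laplacian responses over interior pixels."""
    h, w = len(image), len(image[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])
            total += lap * lap
    return total

def select_reference(focal_stack):
    """Pick the stack index maximizing the sharpness criterion."""
    return max(range(len(focal_stack)), key=lambda i: sharpness(focal_stack[i]))

blurry = [[10, 10, 10], [10, 11, 10], [10, 10, 10]]  # weak edges, out of focus
sharp  = [[0, 0, 0], [0, 100, 0], [0, 0, 0]]          # strong edge, in focus
print(select_reference([blurry, sharp]))
```

The selected image is then encoded first and reconstructed, and the remaining stack images are predicted from it, as in steps (31) and (32).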
  • Patent number: 10536682
    Abstract: The present disclosure relates to a method for reproducing an item of video content filmed using a camera. An item of video content composed of sequences is developed and enhanced by commands applied to the camera at the time of filming. With a view to reproduction, the video content is divided into sequences. The commands applied to the camera are extracted for each sequence and make it possible to calculate at least one haptic actuator control parameter associated with that sequence. At the time of the reproduction of a sequence of the video content, at least one control parameter thus calculated controls at least one haptic actuator. In this way, the spectator perceives stimuli that enhance their perception of the video document during reproduction. Advantageously, the player able to reproduce the enhanced content determines a cinematographic effect for a set of sequences from the commands applied to the camera.
    Type: Grant
    Filed: February 24, 2014
    Date of Patent: January 14, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Fabien Danieau, Julien Fleureau, Philippe Guillotel, Nicolas Mollet, Anatole Lecuyer, Marc Christie
  • Patent number: 10440446
    Abstract: The invention relates to a method for generating haptic coefficients associated with an audiovisual document. Initially, data is extracted from an audio and/or video track and used to calculate at least one first group of haptic coefficients from an autoregressive model applied to the extracted data. These haptic coefficients are designed to program a filter supplying, at its output, the control parameters for controlling at least one haptic actuator. Then, a "haptic" sequence of the audiovisual document is determined and the calculated haptic parameters are associated with the determined sequence. In this manner, the haptic parameters enabling the control of one or more actuators are easily calculated and easily reproducible. Advantageously, the data used for the calculation is extracted from the selected sequence.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: October 8, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Julien Fleureau, Fabien Danieau, Philippe Guillotel
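The idea of fitting an autoregressive model to an extracted signal can be illustrated with a first-order least-squares fit. This AR(1) estimator is an assumption chosen for brevity, not the patented procedure, and the signal is synthetic:

```python
def ar1_coefficient(signal):
    """Least-squares estimate of a in the model x[t] = a * x[t-1] + noise."""
    num = sum(signal[t] * signal[t - 1] for t in range(1, len(signal)))
    den = sum(signal[t - 1] ** 2 for t in range(1, len(signal)))
    return num / den

# A decaying signal, standing in for, e.g., an energy envelope extracted
# from an audio track:
x = [1.0]
for _ in range(50):
    x.append(0.8 * x[-1])
print(round(ar1_coefficient(x), 3))  # recovers the 0.8 decay factor
```

The fitted coefficient then parameterizes a filter whose output drives the actuator, as the abstract describes.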
  • Patent number: 10297009
    Abstract: A method and apparatus for generating an extrapolated image from existing film or video content, which can be displayed beyond the borders of the existing film or video content to increase viewer immersiveness, are provided. The present principles provide for hierarchical processing in which higher-resolution images are generated at each higher level, wherein the higher-level image is generated based on prediction and weighting derived from the current-level image, and the current level is refined for the prediction based on overlapping data.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: May 21, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Fabrice Urban, Philippe Guillotel, Laura Turban
  • Patent number: 10271060
    Abstract: A method for generating at least one image with a first dynamic range, from an image with a second dynamic range, which is lower than the first dynamic range is described. The method includes obtaining an epitome of the image with a first dynamic range, called a first epitome. Thereafter, the image with a first dynamic range is generated from the image with a second dynamic range and the first epitome.
    Type: Grant
    Filed: April 6, 2016
    Date of Patent: April 23, 2019
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Philippe Guillotel, Martin Alain, Dominique Thoreau, Mehmet Turkan
  • Publication number: 20190064926
    Abstract: An apparatus and method are provided in which pressure sensors are disposed in a configuration, such as a matrix format, on the surface of a user input device. The processor generates a proxy on the image having a plurality of points disposed in a corresponding configuration. Each proxy point is associated with a pressure sensor location. The processor then generates an output effect responsive to input signals received from one or more pressure sensors of the input device.
    Type: Application
    Filed: August 22, 2018
    Publication date: February 28, 2019
    Inventors: Fabien DANIEAU, Antoine COSTES, Edouard CALLENS, Philippe GUILLOTEL
  • Patent number: 10133547
    Abstract: A method and device for obtaining a sound, wherein information representative of a speed of a first object moving on a first surface is obtained. The obtained speed information is used with one or more reference sounds to obtain the sound. The one or more reference sounds are associated with a determined speed of displacement of a second object moving on a second surface, the first surface being different from the second surface.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: November 20, 2018
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Julien Fleureau, Yoan Lefevre, Philippe Guillotel
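One plausible reading of the mapping above: select the reference sound recorded at the nearest speed, then retime it by the ratio of the observed speed to the reference speed. Both the nearest-speed selection and the naive nearest-neighbor resampling are assumptions for illustration:

```python
def retime(samples, rate):
    """Naive resampling by nearest-neighbor indexing; rate > 1 shortens the sound."""
    n = max(1, int(len(samples) / rate))
    return [samples[min(len(samples) - 1, int(i * rate))] for i in range(n)]

def sound_for_speed(reference_sounds, observed_speed):
    """Pick the reference recorded at the nearest speed, then retime it."""
    ref_speed, ref_samples = min(reference_sounds,
                                 key=lambda s: abs(s[0] - observed_speed))
    return retime(ref_samples, observed_speed / ref_speed)

# One hypothetical reference: a short waveform recorded while moving at 1.0 m/s.
refs = [(1.0, [0, 1, 0, -1] * 4)]
out = sound_for_speed(refs, observed_speed=2.0)
print(len(out))  # twice the speed yields half as many samples
```

A production system would use proper interpolation and pitch handling; the point here is only the speed-to-sound lookup-and-scale structure.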
  • Patent number: 10109056
    Abstract: A method and device for eye gaze estimation with regard to a sequence of images. The method comprises: receiving a sequence of first video images and a corresponding sequence of first eye images of a user watching the first video images; determining first saliency maps associated with at least a part of the first video images; estimating associated first gaze points from the first saliency maps associated with the video images associated with the first eye images; storing pairs of first eye images/first gaze points in a database; for a new eye image, called a second eye image, estimating an associated second gaze point from the estimated first gaze points and from a second saliency map associated with a second video image associated with the second eye image; and storing the second eye image and its associated second gaze point in the database.
    Type: Grant
    Filed: March 29, 2013
    Date of Patent: October 23, 2018
    Assignee: Thomson Licensing
    Inventors: Phi Bang Nguyen, Julien Fleureau, Christel Chamaret, Philippe Guillotel
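A heavily simplified sketch of the estimation loop above: a new eye image is matched against the stored (eye image, gaze point) pairs, and the matched gaze point is refined toward the nearest strong peak of the new frame's saliency map, after which the new pair is stored. The feature representation, matching rule, and threshold are all illustrative assumptions:

```python
def eye_distance(a, b):
    """Squared distance between two eye-image feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def estimate_gaze(database, new_eye, saliency_map, threshold=0.8):
    # 1. The nearest stored eye image gives a first gaze estimate.
    _, nn_gaze = min(database, key=lambda pair: eye_distance(pair[0], new_eye))
    # 2. Among strongly salient pixels, keep the one closest to that estimate.
    peaks = [(y, x) for y, row in enumerate(saliency_map)
             for x, v in enumerate(row) if v >= threshold]
    if not peaks:
        return nn_gaze
    return min(peaks, key=lambda p: (p[0] - nn_gaze[0]) ** 2 + (p[1] - nn_gaze[1]) ** 2)

# Tiny toy database: eye feature vectors paired with (row, col) gaze points.
db = [((0.1, 0.2, 0.3), (1, 1)), ((0.9, 0.8, 0.7), (2, 0))]
sal = [[0.0, 0.1, 0.0],
       [0.2, 0.9, 0.1],
       [0.0, 0.1, 0.85]]
gaze = estimate_gaze(db, new_eye=(0.15, 0.25, 0.35), saliency_map=sal)
print(gaze)
db.append(((0.15, 0.25, 0.35), gaze))  # grow the database, as in the abstract
```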
  • Publication number: 20180300587
    Abstract: Patches of a degraded version of an image are clustered such that the clusters are based on errors computed between patches of the degraded version, processed with upgrade functions associated respectively with the clusters, and the corresponding patches of a full-grade version of the image. The source image is thus used to determine clusters for the image to restore. A restoration framework based on clustering according to the present principles is also disclosed.
    Type: Application
    Filed: October 11, 2016
    Publication date: October 18, 2018
    Inventors: Martin ALAIN, Christine GUILLEMOT, Dominique THOREAU, Philippe GUILLOTEL
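The cluster-assignment rule above can be sketched minimally: a degraded patch is assigned to the cluster whose upgrade function best reproduces the corresponding full-grade patch. The per-cluster linear upgrade functions and flattened-patch representation are assumptions for illustration:

```python
def assign_cluster(degraded_patch, full_patch, upgrade_functions):
    """Return the index of the upgrade function minimizing the squared error
    between the upgraded degraded patch and the full-grade patch."""
    def error(f):
        upgraded = [f(v) for v in degraded_patch]
        return sum((u - g) ** 2 for u, g in zip(upgraded, full_patch))
    return min(range(len(upgrade_functions)),
               key=lambda i: error(upgrade_functions[i]))

# Two hypothetical per-cluster upgrade functions: mild and strong gain.
upgrades = [lambda v: 1.1 * v, lambda v: 2.0 * v]
degraded = [10, 20, 30]
full     = [20, 40, 60]  # this patch was degraded by halving its values
print(assign_cluster(degraded, full, upgrades))  # the 2x upgrade fits best
```

At restoration time, patches of the image to restore would reuse the upgrade function of their assigned cluster, which is the role the source image plays in the abstract.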