Patents by Inventor Patrick Perez

Patrick Perez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11735199
    Abstract: The disclosure relates to a method for modifying the style of an audio object, i.e., for processing an input audio signal. According to an embodiment, the method includes obtaining a base audio signal that is a copy of the input audio signal, and generating an output audio signal from the base signal, the output audio signal having style features obtained by modifying the base signal so that the distance between base style features, representative of the style of the base signal, and a reference style feature decreases. The disclosure also relates to a corresponding electronic device, computer readable program product and computer readable storage medium.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: August 22, 2023
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Quang Khanh Ngoc Duong, Alexey Ozerov, Eric Grinstein, Patrick Perez
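The loop described in this abstract — copy the input, then repeatedly modify the copy so that the distance between its style features and a reference style feature decreases — can be sketched numerically. Everything concrete below is an assumption for illustration, not the patented method: the `style_features` descriptor (coarse spectral-band averages) and the gradient-descent-with-backtracking update are stand-ins for whatever features and optimizer an actual implementation would use.

```python
import numpy as np

def style_features(signal):
    # Hypothetical style descriptor: mean spectral magnitude in 4 coarse bands.
    spec = np.abs(np.fft.rfft(signal))[:32]
    return spec.reshape(4, -1).mean(axis=1)

def num_grad(f, x, eps=1e-4):
    # Central-difference gradient of a scalar function f at x.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def stylize(input_signal, ref_style, lr=0.05, steps=30):
    base = input_signal.copy()  # base audio signal = copy of the input
    loss = lambda s: float(np.sum((style_features(s) - ref_style) ** 2))
    for _ in range(steps):
        g = num_grad(loss, base)
        step = lr
        # Backtrack so each update actually decreases the style distance.
        while step > 1e-8 and loss(base - step * g) >= loss(base):
            step /= 2
        base = base - step * g
    return base
```

Running `stylize` on a random 64-sample signal moves its style features measurably closer to any chosen reference, which is the invariant the abstract states (the distance "decreases").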
  • Patent number: 11450166
    Abstract: A portable electronic voting machine is provided. The portable electronic voting machine comprises a smart panel configured to display voting process information; and a base station configured to house the smart panel. The electronic voting machine is configured to have the smart panel be removable from the base station and store voting information selected while the smart panel is removed from the base station.
    Type: Grant
    Filed: August 14, 2013
    Date of Patent: September 20, 2022
    Assignee: Hart Intercivic, Inc.
    Inventors: James M. Canter, Philip J. Nathan, Edward Patrick Perez, Denton L. Simpson, Drew Eldridge Tinney
  • Patent number: 11412275
    Abstract: A method provides translation of metadata related to enhancement of a video signal according to a first high dynamic range video distribution type into metadata related to enhancement of a video signal according to a second high dynamic range video distribution type. Translation is done between a value of a first metadata set corresponding to a first type of high dynamic range video and a value of a second metadata set corresponding to a second type of high dynamic range video and uses an association that may be stored in a lookup table that is determined according to differences between a test image reconstructed using the metadata of first type and the same image reconstructed using the metadata of second type. A receiver apparatus and a transmitter apparatus comprising the translation method are also disclosed.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: August 9, 2022
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Pierre Andrivon, Edouard Francois, Patrick Perez
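The abstract above says the translation between the two metadata sets uses an association, storable in a lookup table, determined by comparing a test image reconstructed with each metadata type. A minimal sketch of that table-building step follows; the two "reconstruction" functions (a gamma-style curve and a gain-style curve) are invented stand-ins for real HDR tone-mapping operators, and the mean-squared-error comparison is an assumption.

```python
import numpy as np

# Toy reconstruction operators for two hypothetical HDR metadata types.
def reconstruct_type1(image, m1):
    return image ** (1.0 / m1)           # gamma-style curve, parameter m1

def reconstruct_type2(image, m2):
    return np.clip(image * m2, 0.0, 1.0)  # gain-style curve, parameter m2

def build_lookup(test_image, values1, values2):
    """For each first-type metadata value, pick the second-type value whose
    reconstruction of the test image differs least (per the abstract)."""
    table = {}
    for m1 in values1:
        target = reconstruct_type1(test_image, m1)
        errs = [np.mean((reconstruct_type2(test_image, m2) - target) ** 2)
                for m2 in values2]
        table[m1] = values2[int(np.argmin(errs))]
    return table
```

At translation time the receiver or transmitter would then replace each incoming first-type value by `table[value]` instead of recomputing the comparison.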
  • Publication number: 20210274226
    Abstract: A method provides translation of metadata related to enhancement of a video signal according to a first high dynamic range video distribution type into metadata related to enhancement of a video signal according to a second high dynamic range video distribution type. Translation is done between a value of a first metadata set corresponding to a first type of high dynamic range video and a value of a second metadata set corresponding to a second type of high dynamic range video and uses an association that may be stored in a lookup table that is determined according to differences between a test image reconstructed using the metadata of first type and the same image reconstructed using the metadata of second type. A receiver apparatus and a transmitter apparatus comprising the translation method are also disclosed.
    Type: Application
    Filed: May 21, 2019
    Publication date: September 2, 2021
    Inventors: Pierre ANDRIVON, Edouard FRANCOIS, Patrick PEREZ
  • Patent number: 10880466
    Abstract: A method and system are provided for refocusing images captured by a plenoptic camera. In one embodiment the plenoptic camera is in processing communication with an audio capture device. The method comprises the steps of determining the direction of a dominant audio source associated with an image; creating an audio zoom by filtering out all audio signals except those associated with said dominant audio source; and performing automatic refocusing of said image based on said created audio zoom.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: December 29, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Valérie Allie, Pierre Hellier, Quang Khanh Ngoc Duong, Patrick Perez
  • Publication number: 20200286499
    Abstract: The disclosure relates to a method for modifying the style of an audio object, i.e., for processing an input audio signal. According to an embodiment, the method includes obtaining a base audio signal that is a copy of the input audio signal, and generating an output audio signal from the base signal, the output audio signal having style features obtained by modifying the base signal so that the distance between base style features, representative of the style of the base signal, and a reference style feature decreases. The disclosure also relates to a corresponding electronic device, computer readable program product and computer readable storage medium.
    Type: Application
    Filed: September 14, 2018
    Publication date: September 10, 2020
    Inventors: Quang Khanh Ngoc DUONG, Alexey OZEROV, Eric GRINSTEIN, Patrick PEREZ
  • Patent number: 10580210
    Abstract: A method for refocusing, on at least one common point of interest, the rendering of one set of plenoptic video data provided by one plenoptic device belonging to a set of plenoptic devices capturing simultaneously a same scene. According to the present disclosure, said method comprises: obtaining (21) a common 3D reference system used for spatially locating said plenoptic device that has provided said set of plenoptic video data and at least one other device of said set of plenoptic devices, from said at least one common point of interest, determining (22) common refocusing plane parameters in said common 3D reference system, refocusing (23) the rendering of said set of plenoptic video data by converting (231) said common refocusing plane parameters into a rendering refocusing plane of a 3D reference system associated with said plenoptic device.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: March 3, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Pierre Hellier, Valerie Allie, Patrick Perez
  • Patent number: 10310044
    Abstract: A computer-implemented method characterizes molecular diffusion within a body from a set of diffusion-weighted magnetic resonance signals by computing a weighted average of a plurality of multi-compartment diffusion models fitted to the set of signals, the weighted average being computed using weights representative of a performance criterion of each of the models. Each of the multi-compartment diffusion models comprises a same number of compartments but a different number of subsets of compartments, the compartments of a same subset being identical to each other.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: June 4, 2019
    Assignees: UNIVERSITE DE RENNES 1, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE, INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE (INRIA), INSTITUT NATIONAL DE LA SANTE ET DE LA RECHERCHE MEDICALE (INSERM)
    Inventors: Patrick Perez, Olivier Commowick, Christian Barillot, Aymeric Stamm
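The abstract leaves the performance criterion and the weighting scheme unspecified. One standard convention for turning a per-model criterion into averaging weights (assumed here purely for illustration — the patent may use a different criterion) is the Akaike-style weight exp(-Δcriterion/2), normalized over the model pool:

```python
import numpy as np

def model_average(predictions, criteria):
    """Weighted average of per-model predictions, with weights derived from
    a performance criterion (lower = better) via the Akaike-style
    convention w_i ∝ exp(-(c_i - c_min) / 2). The exp(-delta/2) form is an
    assumption, not necessarily the criterion used in the patent."""
    criteria = np.asarray(criteria, dtype=float)
    delta = criteria - criteria.min()
    w = np.exp(-delta / 2.0)
    w /= w.sum()
    avg = sum(wi * p for wi, p in zip(w, np.asarray(predictions, dtype=float)))
    return w, avg
```

With this scheme a model whose criterion is much worse than the best one contributes almost nothing, so the average degrades gracefully instead of committing to a single fitted model.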
  • Patent number: 10268368
    Abstract: Various systems and methods for determining a set of gesture components of touch input are provided. Touch data can be obtained (301), and a number of gesture components to be generated can be selected (302). A set of gesture components can be generated (303) based on the touch data and the number. For example, a sparse matrix decomposition can be used to generate the set of gesture components. The set of gesture components can be stored (304) in a non-transitory computer-readable medium.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: April 23, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Kiran Varanasi, Patrick Perez
  • Patent number: 10249046
    Abstract: A method for tracking an object commences by first establishing the object (12) in a current frame. Thereafter, a background region (202) is established encompassing the object in the current frame. The location for the object (12) is then estimated in a next frame. Next, the propagation of the background region (202) is determined. Finally, the object is segmented from its background based on propagation of the background region, thereby allowing tracking of the object from frame to frame.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: April 2, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Tomas Enrique Crivelli, Juan Manuel Perez Rua, Patrick Perez
  • Publication number: 20180374263
    Abstract: A method for refocusing, on at least one common point of interest, the rendering of one set of plenoptic video data provided by one plenoptic device belonging to a set of plenoptic devices capturing simultaneously a same scene. According to the present disclosure, said method comprises: obtaining (21) a common 3D reference system used for spatially locating said plenoptic device that has provided said set of plenoptic video data and at least one other device of said set of plenoptic devices, from said at least one common point of interest, determining (22) common refocusing plane parameters in said common 3D reference system, refocusing (23) the rendering of said set of plenoptic video data by converting (231) said common refocusing plane parameters into a rendering refocusing plane of a 3D reference system associated with said plenoptic device.
    Type: Application
    Filed: December 8, 2016
    Publication date: December 27, 2018
    Inventors: Pierre HELLIER, Valerie ALLIE, Patrick PEREZ
  • Patent number: 10147199
    Abstract: A method and an apparatus for determining an orientation of a video are suggested. The method comprises the steps of: estimating a motion of the video; extracting translation-based parameters from the estimated motion of the video; and computing at least one feature giving the evolution of the horizontal translation over time against the evolution of the vertical translation according to the translation-based parameters, the feature being used for determining the orientation of the video.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: December 4, 2018
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Claire-Helene Demarty, Lionel Oisel, Patrick Perez
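A minimal sketch of the feature this abstract describes: compare how horizontal translation evolves over time against vertical translation. The specific statistic (accumulated absolute motion per axis) and the "landscape"/"portrait" labels are assumptions for illustration; the intuition is that a video recorded in the wrong orientation shows its dominant camera pan on the wrong axis.

```python
import numpy as np

def estimate_orientation(dx, dy):
    """dx, dy: per-frame horizontal and vertical translation estimates.
    Heuristic feature (an assumption, not the patented one): ratio of
    accumulated |horizontal| to |vertical| translation."""
    h = float(np.sum(np.abs(dx)))
    v = float(np.sum(np.abs(dy)))
    return "landscape" if h >= v else "portrait"
```

A real system would feed such features to a trained classifier rather than a fixed threshold, but the axis comparison is the core signal.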
  • Publication number: 20180341805
    Abstract: In a particular implementation, a codebook C can be used for quantizing a feature vector of a database image into a quantization index, and then a different codebook (B) can be used to approximate the feature vector based on the quantization index. The codebooks B and C can have different sizes. Before performing image search, a lookup table can be built offline to include distances between the feature vector for a query image and codevectors in codebook B to speed up the image search. Using triplet constraints wherein a first image and a second image are indicated as a matching pair and the first image and a third image as non-matching, the codebooks B and C can be trained for the task of image search. The present principles can be applied to regular vector quantization, product quantization, and residual quantization.
    Type: Application
    Filed: November 4, 2016
    Publication date: November 29, 2018
    Inventors: Himalaya JAIN, Cagdas BILEN, Joaquin ZEPEDA SALVATIERRA, Patrick PEREZ
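The two-codebook search described in this abstract can be sketched in a few lines: codebook C quantizes each database feature to an index, codebook B approximates features from indices, and an offline lookup table of query-to-B distances turns per-item scoring into a single table read. The sketch below uses plain vector quantization with B and C of the same size for simplicity; the abstract allows different sizes and product/residual variants, and the codebooks here are random rather than trained with the triplet constraints the abstract describes.

```python
import numpy as np

def quantize(x, C):
    """Index of the nearest codevector in codebook C (shape [K, d])."""
    return int(np.argmin(np.linalg.norm(C - x, axis=1)))

def build_lut(query_feature, B):
    """Offline table: distance from the query feature to every codevector
    of codebook B. Scoring a database item then costs one lookup,
    lut[index], instead of a d-dimensional distance computation."""
    return np.linalg.norm(B - query_feature, axis=1)
```

At search time each database image is represented only by its C-index; ranking the database against a query is `lut[index]` per item, which is where the speed-up comes from.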
  • Patent number: 10114891
    Abstract: A method and a system of audio retrieval and source separation are described. The method comprises the steps of: receiving a textual query; retrieving a preliminary audio sample from an auxiliary audio database; retrieving a target audio sample from a target audio database; and separating the retrieved target audio sample into a plurality of audio source signals. The corresponding system comprises an input unit, a storing unit and a processing unit to implement the method.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: October 30, 2018
    Assignee: Thomson Licensing
    Inventors: Alexey Ozerov, Patrick Perez, Louis Chevallier, Lionel Oisel
  • Publication number: 20180308502
    Abstract: A method for processing an input signal having an audio component is described. The method includes obtaining a set of time parameters from a time frequency transformation of the audio component of the input signal, the audio component being a mixture of audio signals comprising at least one first audio signal of a first audio source; determining at least one motion feature of the first audio source from a visual sequence corresponding to the first audio signal; obtaining a weight vector of the set of time parameters based on the motion feature; and determining a time frequency transformation of the first audio signal based on the weight vector.
    Type: Application
    Filed: April 18, 2018
    Publication date: October 25, 2018
    Inventors: Sanjeel PAREKH, Alexey OZEROV, Quang Khanh Ngoc DUONG, Gael RICHARD, Slim ESSID, Patrick PEREZ
  • Publication number: 20180308258
    Abstract: A particular implementation determines color palettes of images by extracting and decomposing color palettes based on the image color content. The decomposition can produce a dictionary matrix, an activation matrix, or both. The dictionary matrix can be used in recoloring an image, either directly or after storing. Another implementation selects a color palette to recolor an image by accessing metadata associated with the image and estimating a scene type based on the metadata and/or other information. Color palettes are retrieved from memory corresponding to the scene type of the image and are used for recoloring the image. Instructions for the implementation can be stored on a non-transitory computer readable medium such that the embodiments can be implemented by one or more processors.
    Type: Application
    Filed: November 10, 2015
    Publication date: October 25, 2018
    Inventors: Pierre HELLIER, Neus SABATER, Patrick PEREZ
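The dictionary/activation decomposition this abstract describes maps naturally onto nonnegative matrix factorization: pixels factor into an activation matrix times a dictionary of palette colors. The sketch below uses standard multiplicative NMF updates, which is one common way (assumed here, not confirmed by the abstract) to obtain such a factorization.

```python
import numpy as np

def extract_palette(pixels, n_colors, n_iter=300):
    """Factor an [n_pixels, 3] nonnegative color matrix as W @ H, where
    H ([n_colors, 3]) is the palette dictionary and W the activations,
    via multiplicative-update NMF (Lee-Seung style updates)."""
    rng = np.random.default_rng(0)
    W = rng.random((pixels.shape[0], n_colors)) + 0.1   # activations
    H = rng.random((n_colors, 3)) + 0.1                 # palette colors
    for _ in range(n_iter):
        H *= (W.T @ pixels) / (W.T @ W @ H + 1e-9)
        W *= (pixels @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Recoloring, as the abstract notes, then amounts to keeping the activations `W` and swapping or editing rows of the dictionary `H`.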
  • Publication number: 20180288307
    Abstract: A method and system are provided for refocusing images captured by a plenoptic camera. In one embodiment the plenoptic camera is in processing communication with an audio capture device. The method comprises the steps of determining the direction of a dominant audio source associated with an image; creating an audio zoom by filtering out all audio signals except those associated with said dominant audio source; and performing automatic refocusing of said image based on said created audio zoom.
    Type: Application
    Filed: September 28, 2016
    Publication date: October 4, 2018
    Applicant: THOMSON Licensing
    Inventors: Valérie ALLIE, Pierre HELLIER, Quang Khanh Ngoc DUONG, Patrick PEREZ
  • Publication number: 20180247418
    Abstract: A method for tracking an object commences by first establishing the object (12) in a current frame. Thereafter, a background region (202) is established encompassing the object in the current frame. The location for the object (12) is then estimated in a next frame. Next, the propagation of the background region (202) is determined. Finally, the object is segmented from its background based on propagation of the background region, thereby allowing tracking of the object from frame to frame.
    Type: Application
    Filed: May 26, 2015
    Publication date: August 30, 2018
    Inventors: Tomas Enrique CRIVELLI, Juan Manuel PEREZ RUA, Patrick PEREZ
  • Publication number: 20180239526
    Abstract: Various systems and methods for determining a set of gesture components of touch input are provided. Touch data can be obtained (301), and a number of gesture components to be generated can be selected (302). A set of gesture components can be generated (303) based on the touch data and the number. For example, a sparse matrix decomposition can be used to generate the set of gesture components. The set of gesture components can be stored (304) in a non-transitory computer-readable medium.
    Type: Application
    Filed: May 26, 2015
    Publication date: August 23, 2018
    Inventors: Kiran VARANASI, Patrick PEREZ
  • Patent number: D1026900
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: May 14, 2024
    Assignee: Apple Inc.
    Inventors: Jody Akana, Molly Anderson, Bartley K. Andre, Shota Aoyagi, Marine C. Bataille, Kevin Will Chen, Abidur Rahman Chowdhury, Andrew Patrick Clymer, Clara Geneviève Marine Courtaigne, Markus Diebel, Alexandre B. Girard, Jonathan Gomez Garcia, Aurelio Guzmán, M. Evans Hankey, Anne-Marie Heck, Moises Hernandez Hernandez, Richard P. Howarth, Julian Jaede, Duncan Robert Kerr, Kainoa Kwon-Perez, Nicolas Pedro Lylyk, Aaron Mathew Melim, Peter Russell-Clarke, Benjamin Andrew Shaffer, Clement Tissandier