Patents by Inventor Quang Khanh Ngoc Duong

Quang Khanh Ngoc Duong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11735199
    Abstract: Method for modifying a style of an audio object, and corresponding electronic device, computer readable program products and computer readable storage medium. The disclosure relates to a method for processing an input audio signal. According to an embodiment, the method includes obtaining a base audio signal being a copy of the input audio signal and generating an output audio signal from the base signal, the output audio signal having style features obtained by modifying the base signal so that a distance between base style features representative of a style of the base signal and a reference style feature decreases. The disclosure also relates to a corresponding electronic device, computer readable program product and computer readable storage medium.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: August 22, 2023
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Quang Khanh Ngoc Duong, Alexey Ozerov, Eric Grinstein, Patrick Perez
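The style-modification loop described in this abstract can be sketched in miniature: iteratively modify a copy of the input signal so that a simple spectral "style feature" moves toward that of a reference. The feature choice (unit-normalised magnitude spectrum) and the blending update below are illustrative assumptions, not the patented method.

```python
import numpy as np

# Hypothetical style feature: the unit-normalised magnitude spectrum.
def style_features(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

rng = np.random.default_rng(0)
base = rng.standard_normal(256)                           # copy of the input signal
reference = np.sin(2 * np.pi * 8 * np.arange(256) / 256)  # signal with the target style
target_style = style_features(reference)

signal = base.copy()
d0 = np.linalg.norm(style_features(signal) - target_style)  # initial style distance
for _ in range(200):
    # Blend the magnitude spectrum toward the reference style, keeping the phase;
    # each step shrinks the distance between the signal's style and the target.
    spec = np.fft.rfft(signal)
    mag, phase = np.abs(spec), np.angle(spec)
    new_mag = 0.95 * mag + 0.05 * target_style * np.linalg.norm(mag)
    signal = np.fft.irfft(new_mag * np.exp(1j * phase), n=len(signal))
d1 = np.linalg.norm(style_features(signal) - target_style)  # distance after modification
```

After the loop, `d1` is far smaller than `d0`: the output keeps the base signal's phase but has taken on the reference's spectral style.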
  • Publication number: 20230186093
    Abstract: The present disclosure relates to a method including obtaining metadata upon training a first Deep Neural Network and embedding the obtained metadata in a signal. The present disclosure relates to a method including obtaining metadata related to a prior training of a first Deep Neural Network and adapting a model of a second Deep Neural Network using the obtained metadata. The present disclosure also relates to the corresponding devices, computer storage medium and signal.
    Type: Application
    Filed: May 5, 2021
    Publication date: June 15, 2023
    Applicant: InterDigital CE Patent Holdings
    Inventors: Quang Khanh Ngoc Duong, Thierry Filoche, Francoise Le Bolzer, Francois Schnitzler, Patrick Fontaine
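The embed/extract pair this abstract describes can be illustrated with a minimal framing scheme; the JSON header format and the metadata fields below are assumptions for illustration, not the disclosed signal format.

```python
import json
import struct

# Hypothetical metadata captured while training the first Deep Neural Network.
metadata = {"epochs": 12, "lr": 1e-3, "frozen_layers": ["conv1", "conv2"]}

def embed(payload, meta):
    """Prefix a signal's byte payload with length-framed JSON metadata."""
    blob = json.dumps(meta, sort_keys=True).encode("utf-8")
    return struct.pack(">I", len(blob)) + blob + payload

def extract(signal):
    """Recover the training metadata and the original payload from the signal."""
    (n,) = struct.unpack(">I", signal[:4])
    return json.loads(signal[4:4 + n].decode("utf-8")), signal[4 + n:]

signal = embed(b"\x00\x01\x02", metadata)
recovered, payload = extract(signal)  # a second DNN could adapt its model using `recovered`
```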
  • Patent number: 11207592
    Abstract: A position and an orientation of a user in a virtual 3D scene are determined (22), an action is executed (24) in the virtual 3D scene for the user as a function of the position and orientation of the user with respect to a given place, and a result of the action is outputted (25). For at least one event in the scene consisting in the presence of at least one determined virtual content, metadata linking the event(s) and at least one place of the event(s) are obtained (21). A given event and the given place linked by those metadata are determined (241) as a function of the above position and orientation and of a relationship between that event and a user profile of the user. The action regarding the determined given event and place is executed.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: December 28, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Kiran Varanasi, Quang Khanh Ngoc Duong, Julien Fleureau, Philippe Robert
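The event selection step (position/orientation plus user-profile match) can be sketched as follows; the event metadata, tag-based profile matching, and view-cone threshold are hypothetical stand-ins for the disclosed mechanism.

```python
import numpy as np

# Metadata linking events (determined virtual contents) to places, plus tags
# that can be matched against a user profile -- all values hypothetical.
events = [
    {"name": "gallery_tour", "place": np.array([5.0, 0.0, 0.0]), "tags": {"art"}},
    {"name": "concert",      "place": np.array([0.0, 5.0, 0.0]), "tags": {"music"}},
]
user_profile = {"music"}

def pick_event(position, orientation, events, profile, fov_cos=0.5):
    """Pick the profile-matching event whose place lies inside the view cone."""
    best = None
    for ev in events:
        to_place = ev["place"] - position
        to_place = to_place / np.linalg.norm(to_place)
        facing = float(orientation @ to_place)       # cosine of the viewing angle
        if facing >= fov_cos and (ev["tags"] & profile):
            if best is None or facing > best[0]:
                best = (facing, ev["name"])
    return None if best is None else best[1]

chosen = pick_event(np.zeros(3), np.array([0.0, 1.0, 0.0]), events, user_profile)
```

With the user facing +y, only the "concert" event is both in view and matched to the profile, so the action executes for it.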
  • Patent number: 10964085
    Abstract: The present disclosure relates to methods, apparatus or systems for inciting a user consuming immersive content to rotate the immersive rendering device in the direction of a region of interest. According to the present principles, an object representative of a character is inserted in the field of view of the user. The character is computed in such a way that it looks in the direction of the region of interest from its location in the immersive content. In addition, the face and body attitude of the character may reflect an emotion associated with the region of interest, for example fear, happiness or interest. The user is naturally incited to look in the direction indicated by the inserted character.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: March 30, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Quang Khanh Ngoc Duong, Joel Sirot, Gwenaelle Marquant, Claire-Helene Demarty
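Computing the inserted character's gaze reduces to simple geometry; the 2-D setup and the emotion lookup table below are hypothetical illustrations of the idea.

```python
import math

# Character and region-of-interest positions in the scene (2-D for brevity).
char_pos = (2.0, 1.0)
roi_pos = (5.0, 4.0)

# Yaw the character must adopt so that it appears to look at the region of interest.
yaw_deg = math.degrees(math.atan2(roi_pos[1] - char_pos[1], roi_pos[0] - char_pos[0]))

# Hypothetical lookup mapping region-of-interest categories to facial attitudes.
emotion_of_roi = {"landmark": "interest", "hazard": "fear"}
attitude = emotion_of_roi.get("hazard", "neutral")
```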
  • Patent number: 10880466
    Abstract: A method and system are provided for refocusing images captured by a plenoptic camera. In one embodiment the plenoptic camera operates in conjunction with an audio capture device. The method comprises the steps of determining the direction of a dominant audio source associated with an image; creating an audio zoom by filtering out all audio signals except those associated with said dominant audio source; and performing automatic refocusing of said image based on said created audio zoom.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: December 29, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Valérie Allie, Pierre Hellier, Quang Khanh Ngoc Duong, Patrick Perez
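The direction-then-zoom pipeline can be sketched with a two-microphone toy example: estimate the dominant source's inter-microphone delay from the cross-correlation peak, then align and average the channels so that source is reinforced. This delay-and-sum stand-in is an assumption, not the patented filtering.

```python
import numpy as np

rng = np.random.default_rng(1)
src = rng.standard_normal(1024)                    # dominant audio source
true_delay = 5                                     # inter-microphone delay in samples
mic1 = src + 0.1 * rng.standard_normal(1024)
mic2 = np.roll(src, true_delay) + 0.1 * rng.standard_normal(1024)

# Direction estimate: the cross-correlation peak gives the delay, hence
# (with known microphone geometry) the direction of the dominant source.
corr = np.correlate(mic2, mic1, mode="full")
est_delay = int(np.argmax(corr)) - (len(mic1) - 1)

# "Audio zoom": align the channels on the dominant source and average,
# which reinforces that source and attenuates everything else.
zoomed = 0.5 * (mic1 + np.roll(mic2, -est_delay))
```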
  • Publication number: 20200342650
    Abstract: The present disclosure relates to methods, apparatus or systems for inciting a user consuming immersive content to rotate the immersive rendering device in the direction of a region of interest. According to the present principles, an object representative of a character is inserted in the field of view of the user. The character is computed in such a way that it looks in the direction of the region of interest from its location in the immersive content. In addition, the face and body attitude of the character may reflect an emotion associated with the region of interest, for example fear, happiness or interest. The user is naturally incited to look in the direction indicated by the inserted character.
    Type: Application
    Filed: July 8, 2020
    Publication date: October 29, 2020
    Inventors: Quang Khanh Ngoc DUONG, Joel SIROT, Gwenaelle MARQUANT, Claire-Helene DEMARTY
  • Publication number: 20200286499
    Abstract: Method for modifying a style of an audio object, and corresponding electronic device, computer readable program products and computer readable storage medium. The disclosure relates to a method for processing an input audio signal. According to an embodiment, the method includes obtaining a base audio signal being a copy of the input audio signal and generating an output audio signal from the base signal, the output audio signal having style features obtained by modifying the base signal so that a distance between base style features representative of a style of the base signal and a reference style feature decreases. The disclosure also relates to a corresponding electronic device, computer readable program product and computer readable storage medium.
    Type: Application
    Filed: September 14, 2018
    Publication date: September 10, 2020
    Inventors: Quang Khanh Ngoc DUONG, Alexey OZEROV, Eric GRINSTEIN, Patrick PEREZ
  • Patent number: 10748321
    Abstract: The present disclosure relates to methods, apparatus or systems for inciting a user consuming immersive content to rotate the immersive rendering device in the direction of a region of interest. According to the present principles, an object representative of a character is inserted in the field of view of the user. The character is computed in such a way that it looks in the direction of the region of interest from its location in the immersive content. In addition, the face and body attitude of the character may reflect an emotion associated with the region of interest, for example fear, happiness or interest. The user is naturally incited to look in the direction indicated by the inserted character.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: August 18, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Quang Khanh Ngoc Duong, Joel Sirot, Gwenaelle Marquant, Claire-Helene Demarty
  • Patent number: 10674057
    Abstract: A plenoptic camera and an associated method are provided. The camera has an array of sensors for generating digital images, which have associated audio signals. The array of sensors is configured to capture digital images associated with a default spatial coordinate and to receive control input from a processor to change focus from said default spatial coordinate to a new spatial coordinate based on the occurrence of an event at said new spatial coordinate.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: June 2, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Pierre Hellier, Quang Khanh Ngoc Duong, Valerie Allie, Philippe Leyendecker
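The control flow described in this abstract is essentially a focus state retargeted by events; the class and coordinate values below are a hypothetical minimal sketch.

```python
# Minimal control-flow sketch: the sensor array tracks a default coordinate
# until an event at a new coordinate retargets the focus (names hypothetical).
class PlenopticFocusController:
    def __init__(self, default_xy):
        self.focus = tuple(default_xy)   # default spatial coordinate

    def on_event(self, event_xy):
        # Control input from the processor: refocus on the event's coordinate.
        self.focus = tuple(event_xy)

cam = PlenopticFocusController((0.0, 0.0))
cam.on_event((3.5, -1.2))                # e.g. a sound detected at this position
```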
  • Publication number: 20190344167
    Abstract: A position and an orientation of a user in a virtual 3D scene are determined (22), an action is executed (24) in the virtual 3D scene for the user as a function of the position and orientation of the user with respect to a given place, and a result of the action is outputted (25). For at least one event in the scene consisting in the presence of at least one determined virtual content, metadata linking the event(s) and at least one place of the event(s) are obtained (21). A given event and the given place linked by those metadata are determined (241) as a function of the above position and orientation and of a relationship between that event and a user profile of the user. The action regarding the determined given event and place is executed.
    Type: Application
    Filed: November 16, 2017
    Publication date: November 14, 2019
    Inventors: Kiran VARANASI, Quang Khanh Ngoc DUONG, Julien FLEUREAU, Philippe ROBERT
  • Publication number: 20190103005
    Abstract: A method and apparatus for recognizing an activity of a monitored individual in an environment are described, including: receiving a first acoustic signal; performing audio feature extraction on the first acoustic signal in a first temporal window; classifying the first acoustic signal by determining a location of the monitored individual in the environment based on the extracted features of the first acoustic signal in the first temporal window; receiving a second acoustic signal; performing audio feature extraction on the second acoustic signal in a second temporal window; and classifying the second acoustic signal by determining an activity of the monitored individual in the location in the environment based on the extracted features of the second acoustic signal in the second temporal window.
    Type: Application
    Filed: March 23, 2017
    Publication date: April 4, 2019
    Inventors: Philippe Gilberton, Quang Khanh Ngoc Duong
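The two-stage classification (first window yields the location, second window yields the activity) can be sketched with toy features and a nearest-centroid classifier; the feature set, centroids, and labels are hypothetical.

```python
import numpy as np

# Hypothetical features: log-energy and zero-crossing rate of a window.
def features(window):
    energy = np.log(np.mean(window ** 2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(window)))) / 2.0
    return np.array([energy, zcr])

def classify(feat, centroids):
    return min(centroids, key=lambda label: np.linalg.norm(feat - centroids[label]))

# Centroids assumed to have been learned offline from labelled recordings.
room_centroids = {"kitchen": np.array([0.0, 0.5]), "bedroom": np.array([-6.0, 0.1])}
activity_centroids = {"cooking": np.array([0.0, 0.5]), "sleeping": np.array([-6.0, 0.1])}

rng = np.random.default_rng(2)
window1 = rng.standard_normal(4000)   # first temporal window -> location
room = classify(features(window1), room_centroids)
window2 = rng.standard_normal(4000)   # second temporal window -> activity
activity = classify(features(window2), activity_centroids)
```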
  • Patent number: 10235126
    Abstract: A method and a system (20) of audio source separation are described. The method comprises: receiving (10) an audio mixture and at least one text query associated with the audio mixture; retrieving (11) at least one audio sample from an auxiliary audio database; evaluating (12) the retrieved audio samples; and separating (13) the audio mixture into a plurality of audio sources using the audio samples. The corresponding system (20) comprises a receiver (21) and a processor (22) configured to implement the method.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: March 19, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Quang Khanh Ngoc Duong, Alexey Ozerov, Dalia Elbadawy
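Example-guided separation of this kind is often built on non-negative matrix factorization (NMF); here is a toy sketch where the retrieved audio samples supply fixed spectral templates and only the time activations are estimated. The NMF variant and Wiener-style mask are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Spectral templates of the retrieved example sounds (one column per source);
# in the patent these would come from the text-query-retrieved audio samples.
W = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
H_true = np.abs(rng.standard_normal((2, 20)))
V = W @ H_true                        # magnitude spectrogram of the mixture

# Estimate the time activations H with Euclidean NMF multiplicative updates,
# keeping W fixed since it comes from the example-guided dictionary.
H = np.abs(rng.standard_normal((2, 20))) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)

# Wiener-style mask recovers source 0 from the mixture.
V_hat = W @ H
source0 = (np.outer(W[:, 0], H[0]) / (V_hat + 1e-12)) * V
```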
  • Publication number: 20180358025
    Abstract: To represent and recover the constituent sources present in an audio mixture, informed source separation techniques are used. In particular, a universal spectral model (USM) is used to obtain a sparse time activation matrix for an individual audio source in the audio mixture. The indices of non-zero groups in the time activation matrix are encoded as the side information into a bitstream. The non-zero coefficients of the time activation matrix may also be encoded into the bitstream. At the decoder side, when the coefficients of the time activation matrix are included in the bitstream, the matrix can be decoded from the bitstream. Otherwise, the time activation matrix can be estimated from the audio mixture, the non-zero indices included in the bitstream, and the USM model. Given the time activation matrix, the constituent audio sources can be recovered based on the audio mixture and the USM model.
    Type: Application
    Filed: November 25, 2016
    Publication date: December 13, 2018
    Inventors: Quang Khanh Ngoc DUONG, Alexey OZEROV
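The side-information idea, encoding only the indices of non-zero groups of the sparse time-activation matrix, can be sketched directly; the group size and matrix shape are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
group_size, n_groups, n_frames = 2, 6, 10

# Sparse time-activation matrix: only a few row groups are non-zero.
H = np.zeros((group_size * n_groups, n_frames))
for g in (1, 5):
    H[group_size * g: group_size * (g + 1)] = np.abs(
        rng.standard_normal((group_size, n_frames)))

# Encoder: the side information is just the indices of the non-zero groups.
nz_groups = [g for g in range(n_groups)
             if np.any(H[group_size * g: group_size * (g + 1)])]

# Decoder: knowing the indices, only those coefficients need to be recovered
# (copied here; the disclosure also allows estimating them from the mixture).
H_rebuilt = np.zeros_like(H)
for g in nz_groups:
    H_rebuilt[group_size * g: group_size * (g + 1)] = \
        H[group_size * g: group_size * (g + 1)]
```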
  • Publication number: 20180350125
    Abstract: The present disclosure relates to methods, apparatus or systems for inciting a user consuming immersive content to rotate the immersive rendering device in the direction of a region of interest. According to the present principles, an object representative of a character is inserted in the field of view of the user. The character is computed in such a way that it looks in the direction of the region of interest from its location in the immersive content. In addition, the face and body attitude of the character may reflect an emotion associated with the region of interest, for example fear, happiness or interest. The user is naturally incited to look in the direction indicated by the inserted character.
    Type: Application
    Filed: June 6, 2018
    Publication date: December 6, 2018
    Inventors: Quang Khanh Ngoc DUONG, Joel SIROT, Gwenaelle MARQUANT, Claire-Helene DEMARTY
  • Publication number: 20180308502
    Abstract: A method for processing an input signal having an audio component is described. The method includes obtaining a set of time parameters from a time frequency transformation of the audio component of the input signal, the audio component being a mixture of audio signals comprising at least one first audio signal of a first audio source; determining at least one motion feature of the first audio source from a visual sequence corresponding to the first audio signal; obtaining a weight vector of the set of time parameters based on the motion feature; and determining a time frequency transformation of the first audio signal based on the weight vector.
    Type: Application
    Filed: April 18, 2018
    Publication date: October 25, 2018
    Inventors: Sanjeel PAREKH, Alexey OZEROV, Quang Khanh Ngoc DUONG, Gael RICHARD, Slim ESSID, Patrick PEREZ
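The motion-feature weighting can be illustrated with a toy correlation-based weight vector: audio components whose time activations co-vary with the visual motion of the source receive large weights. The correlation measure and component model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 50
# Motion feature of the on-screen source (e.g. a bowing-hand velocity),
# assumed to be time-aligned with the audio analysis frames.
motion = np.clip(np.sin(np.linspace(0, 4 * np.pi, T)), 0, None)

# Time parameters (e.g. NMF activations) of the audio mixture's components.
activations = np.vstack([
    motion + 0.05 * rng.standard_normal(T),   # component driven by the visible source
    np.abs(rng.standard_normal(T)),           # unrelated component
])

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Weight vector: components that co-vary with the motion get large weights.
weights = np.array([max(correlation(row, motion), 0.0) for row in activations])
dominant = int(np.argmax(weights))
```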
  • Publication number: 20180288307
    Abstract: A method and system are provided for refocusing images captured by a plenoptic camera. In one embodiment the plenoptic camera operates in conjunction with an audio capture device. The method comprises the steps of determining the direction of a dominant audio source associated with an image; creating an audio zoom by filtering out all audio signals except those associated with said dominant audio source; and performing automatic refocusing of said image based on said created audio zoom.
    Type: Application
    Filed: September 28, 2016
    Publication date: October 4, 2018
    Applicant: THOMSON Licensing
    Inventors: Valérie ALLIE, Pierre HELLIER, Quang Khanh Ngoc DUONG, Patrick PEREZ
  • Patent number: 9990936
    Abstract: A method and an apparatus for separating speech data from background data in an audio communication are presented. The method comprises: applying a speech model to the audio communication for separating the speech data from the background data of the audio communication; and updating the speech model as a function of the speech data and the background data during the audio communication.
    Type: Grant
    Filed: October 12, 2015
    Date of Patent: June 5, 2018
    Assignee: THOMSON Licensing
    Inventors: Alexey Ozerov, Quang Khanh Ngoc Duong, Louis Chevallier
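The online model-update idea can be sketched with a running background model adapted frame by frame during the call; spectral subtraction stands in here for the model-based separation, and all constants are illustrative.

```python
import numpy as np

# Running spectral model of the background, updated during the communication.
class BackgroundModel:
    def __init__(self, n_bins, alpha=0.9):
        self.noise = np.ones(n_bins)
        self.alpha = alpha

    def update(self, frame_mag, is_speech):
        if not is_speech:                # adapt only on non-speech frames
            self.noise = self.alpha * self.noise + (1 - self.alpha) * frame_mag

    def enhance(self, frame_mag):
        gain = np.clip(1.0 - self.noise / (frame_mag + 1e-12), 0.0, 1.0)
        return gain * frame_mag          # speech kept, background suppressed

model = BackgroundModel(4)
for _ in range(100):                     # background-only frames observed so far
    model.update(np.array([2.0, 2.0, 0.5, 0.5]), is_speech=False)
enhanced = model.enhance(np.array([10.0, 2.0, 0.5, 8.0]))
```

Bins dominated by the adapted background estimate are driven to zero, while bins with strong speech energy pass through nearly unchanged.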
  • Publication number: 20180115851
    Abstract: The present principles generally relate to audio apparatus, methods, and computer program products and in particular, to improvements that adjust the sound level or levels of one or more audio outputs of an audio system based on the determined origin and/or direction of propagation of a detected human voice in a location. Such an adjustment may be to decrease, mute, or increase the sound level of an audio output producing sound in the direction of the origin of the voice. A sound level produced by other audio outputs may be unchanged.
    Type: Application
    Filed: October 6, 2017
    Publication date: April 26, 2018
    Inventors: Quang Khanh Ngoc Duong, Brian Charles Eriksson, Philippe GILBERTON, Christophe Delaunay
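The level-adjustment logic, ducking only the outputs radiating toward the detected talker, can be sketched as a simple azimuth comparison; the speaker layout, gains, and angular width below are hypothetical.

```python
# Loudspeakers at known azimuths in degrees (all names and values hypothetical).
speakers = {"front": 0.0, "left": 90.0, "right": -90.0}
levels = {name: 1.0 for name in speakers}

def duck_toward(voice_azimuth, levels, speakers, duck_gain=0.2, width=45.0):
    """Reduce the level of any output radiating toward the detected talker."""
    for name, azimuth in speakers.items():
        diff = abs((voice_azimuth - azimuth + 180.0) % 360.0 - 180.0)
        if diff <= width:
            levels[name] = duck_gain     # other outputs are left unchanged
    return levels

# A voice detected at 80 degrees ducks the left output only.
levels = duck_toward(80.0, levels, speakers)
```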
  • Patent number: 9930466
    Abstract: A method and apparatus for processing audio content are described. The method and apparatus include receiving (510) audio content, the audio content including an input audio signal, a first reference audio signal, and a second reference audio signal, determining (550) a processing function for the input audio signal, the processing function determined based on a cost function between the input audio signal, the first reference audio signal and the second reference audio signal, and processing (560) the input audio signal using the determined processing function in order to produce an output audio signal.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: March 27, 2018
    Assignee: THOMSON Licensing
    Inventors: Alexey Ozerov, Marie Guegan, Quang Khanh Ngoc Duong
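One way to picture a cost function between an input and two references determining the processing function: search for the mixing parameter that minimizes the squared error between the input and an interpolation of the references. The quadratic cost and grid search are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
ref1 = rng.standard_normal(64)   # first reference audio signal
ref2 = rng.standard_normal(64)   # second reference audio signal
x = 0.3 * ref1 + 0.7 * ref2      # input signal lying "between" the references

# Cost of rendering the input as a mix a*ref1 + (1-a)*ref2; the minimising `a`
# defines the processing function applied to the input.
def cost(a):
    return float(np.sum((x - (a * ref1 + (1 - a) * ref2)) ** 2))

grid = np.linspace(0.0, 1.0, 101)
a_best = float(grid[np.argmin([cost(a) for a in grid])])
processed = a_best * ref1 + (1 - a_best) * ref2   # output audio signal
```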
  • Publication number: 20180075863
    Abstract: A method is proposed for encoding at least two signals. The method includes mixing the at least two signals in a mixture; sampling a map Z representative of locations of the at least two signals in a time-frequency plane at sampling locations, the sampling delivering a first list of values Z′; and transmitting the mixture of the at least two signals and information representative of the first list of values Z′. The disclosure also relates to the corresponding method for separating signals in a mixture, and corresponding computer program products, devices and bitstream.
    Type: Application
    Filed: September 7, 2017
    Publication date: March 15, 2018
    Inventors: Quang Khanh Ngoc DUONG, Gilles PUY, Alexey OZEROV, Patrick PEREZ
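The sample-and-transmit scheme can be sketched on a tiny binary map: only the values of Z at the chosen sampling locations are sent, and the receiver rebuilds a full map from them. Nearest-sample assignment stands in here for the reconstruction used with the transmitted mixture.

```python
import numpy as np

# Binary map Z: which of two sources dominates each time-frequency cell.
Z = np.array([[0, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 0]])

# Sample Z at a few locations; only these values Z' are transmitted.
sample_locs = [(0, 0), (0, 3), (1, 1), (2, 0), (2, 3)]
Z_prime = [int(Z[r, c]) for r, c in sample_locs]

# Receiver: rebuild a full map by nearest-sample assignment (a simple
# stand-in for the reconstruction used alongside the transmitted mixture).
rebuilt = np.zeros_like(Z)
for r in range(Z.shape[0]):
    for c in range(Z.shape[1]):
        i = min(range(len(sample_locs)),
                key=lambda k: (sample_locs[k][0] - r) ** 2
                              + (sample_locs[k][1] - c) ** 2)
        rebuilt[r, c] = Z_prime[i]
```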