Patents by Inventor Nicolas R. Tsingos

Nicolas R. Tsingos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180115849
    Abstract: Described herein is a method (30) of rendering an audio signal (17) for playback in an audio environment (27) defined by a target loudspeaker system (23), the audio signal (17) including audio data relating to an audio object and associated position data indicative of an object position. Method (30) includes the initial step (31) of receiving the audio signal (17). At step (32) loudspeaker layout data for the target loudspeaker system (23) is received. At step (33) control data is received that is indicative of a position modification to be applied to the audio object in the audio environment (27). At step (38) in response to the position data, loudspeaker layout data and control data, rendering modification data is generated. Finally, at step (39) the audio signal (17) is rendered with the rendering modification data to output the audio signal (17) with the audio object at a modified object position that is between loudspeakers within the audio environment (27).
    Type: Application
    Filed: April 20, 2016
    Publication date: April 26, 2018
    Applicants: DOLBY LABORATORIES LICENSING CORPORATION, DOLBY INTERNATIONAL AB
    Inventors: Dirk Jeroen BREEBAART, Antonio MATEOS SOLE, Heiko PURNHAGEN, Nicolas R. TSINGOS
  • Publication number: 20180109895
    Abstract: Audio signals (201) are received. The audio signals include left and right surround channels (206). The audio signals are played back using far-field loudspeakers (101-108, 401-406) distributed around a space (111, 409) having a plurality of listener positions (112, 410). The left and right surround channels are played back by a pair of far-field loudspeakers (103, 106, 403, 405) arranged at opposite sides of the space having the plurality of listener positions. An audio component (208) coinciding with or approximating audio content common to the left and right surround channels is obtained. The audio component is played back using at least a pair of near-field transducers (109, 110, 407, 408) arranged at one of the listener positions. Associated systems (100, 400), methods (800) and computer program products are provided.
    Type: Application
    Filed: May 12, 2016
    Publication date: April 19, 2018
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Remi AUDFRAY, Nicolas R. TSINGOS, Jurgen W. SCHARPF
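
One way the common surround component described in the entry above could be approximated is with a correlation-weighted mid signal. The sketch below is only an illustration of that idea; the broadband-coherence heuristic and the function name are assumptions, not the method claimed in the application.

```python
import numpy as np

def estimate_common_component(ls: np.ndarray, rs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Estimate a signal approximating content common to the left/right surround channels.

    Hypothetical sketch: scale the mid signal 0.5*(ls + rs) by a broadband
    correlation coefficient so that uncorrelated content is attenuated.
    """
    mid = 0.5 * (ls + rs)
    corr = np.dot(ls, rs) / (np.sqrt(np.dot(ls, ls) * np.dot(rs, rs)) + eps)
    return max(corr, 0.0) * mid   # keep only positively correlated (common) content

# Example: strongly correlated surround channels yield a nearly full-level common component.
t = np.linspace(0, 1, 48000)
ls = np.sin(2 * np.pi * 440 * t)
rs = 0.9 * ls + 0.1 * np.random.randn(t.size)
common = estimate_common_component(ls, rs)
```
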
  • Publication number: 20180109894
    Abstract: Audio perception in local proximity to visual cues is provided. A device includes a video display, first row of audio transducers, and second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The perceived emanation of the audible signal is from a plane of the video display (e.g., a location of a visual cue) by weighing outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at a periphery for increased fidelity in a center portion of the plane and less fidelity at the periphery.
    Type: Application
    Filed: October 19, 2016
    Publication date: April 19, 2018
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Christophe Chabanne, Nicolas R. Tsingos, Charles Q. Robinson
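
The entry above places a perceived source on the display plane by weighting the outputs of the transducer column above and below the screen. A minimal sketch of one such weighting, assuming a constant-power pan law and a normalized 0-to-1 cue height; both choices are illustrative, not the claimed method.

```python
import math

def column_gains(cue_height: float) -> tuple[float, float]:
    """Constant-power pan between the top and bottom transducer of a column.

    cue_height: vertical position of the visual cue on the display, 0.0 = bottom
    edge, 1.0 = top edge. Returns (top_gain, bottom_gain); the pair satisfies
    top_gain**2 + bottom_gain**2 == 1 so loudness stays roughly constant.
    """
    theta = cue_height * math.pi / 2.0
    return math.sin(theta), math.cos(theta)

top, bottom = column_gains(0.25)   # cue in the lower quarter of the screen
```
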
  • Patent number: 9942688
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name, and object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent.
    Type: Grant
    Filed: August 9, 2017
    Date of Patent: April 10, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. Robinson, Nicolas R. Tsingos, Christophe Chabanne
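
The abstract above distinguishes channel-based streams (identified by channel name) from object-based streams (carrying location metadata), all packaged into one serial bitstream. Below is a hypothetical data-model sketch of that distinction; the field names and the JSON packaging are assumptions for illustration and do not reflect the actual bitstream syntax.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class MonoStream:
    """One independent monophonic stream plus the per-stream metadata described above."""
    stream_id: int
    is_object: bool                       # False -> channel-based, True -> object-based
    channel_name: Optional[str] = None    # e.g. "L", "R", "Ls" for channel-based streams
    position: Optional[Tuple[float, float, float]] = None  # allocentric x, y, z in [0, 1]

def pack_metadata(streams: list[MonoStream]) -> bytes:
    """Serialize stream metadata into one contiguous payload (illustrative only)."""
    return json.dumps([asdict(s) for s in streams]).encode("utf-8")

payload = pack_metadata([
    MonoStream(0, is_object=False, channel_name="L"),
    MonoStream(1, is_object=True, position=(0.25, 0.8, 0.5)),
])
```
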
  • Patent number: 9933989
    Abstract: Embodiments are described for a method of rendering audio for playback through headphones comprising receiving digital audio content, receiving binaural rendering metadata generated by an authoring tool processing the received digital audio content, receiving playback metadata generated by a playback device, and combining the binaural rendering metadata and playback metadata to optimize playback of the digital audio content through the headphones.
    Type: Grant
    Filed: October 28, 2014
    Date of Patent: April 3, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Nicolas R. Tsingos, Rhonda Wilson, Sunil Bharitkar, C. Phillip Brown, Alan J. Seefeldt, Remi Audfray
  • Publication number: 20180077515
    Abstract: Improved tools for authoring and rendering audio reproduction data are provided. Some such authoring tools allow audio reproduction data to be generalized for a wide variety of reproduction environments. Audio reproduction data may be authored by creating metadata for audio objects. The metadata may be created with reference to speaker zones. During the rendering process, the audio reproduction data may be reproduced according to the reproduction speaker layout of a particular reproduction environment.
    Type: Application
    Filed: November 3, 2017
    Publication date: March 15, 2018
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Nicolas R. TSINGOS, Charles Q. ROBINSON, Jurgen W. SCHARPF
  • Publication number: 20180027352
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name, and object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent.
    Type: Application
    Filed: August 9, 2017
    Publication date: January 25, 2018
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. ROBINSON, Nicolas R. TSINGOS, Christophe CHABANNE
  • Publication number: 20180005643
    Abstract: In some embodiments, a method, apparatus and computer program for reducing noise from an audio signal captured by a drone (e.g., canceling the noise signature of a drone from the audio signal) using a model of noise emitted by the drone's propulsion system set, where the propulsion system set includes one or more propulsion systems, each of the propulsion systems including an electric motor, and wherein the noise reduction is performed in response to voltage data indicative of instantaneous voltage supplied to each electric motor of the propulsion system set. In some other embodiments, a method, apparatus and computer program for generating a noise model by determining the noise signature of at least one drone based upon a database of noise signals corresponding to at least one propulsion system and canceling the noise signature of the drone in an audio signal based upon the noise model.
    Type: Application
    Filed: January 20, 2016
    Publication date: January 4, 2018
    Applicant: Dolby Laboratories Licensing Corporation
    Inventor: Nicolas R. TSINGOS
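
As an illustration of noise reduction driven by a voltage-indexed noise model, the sketch below performs a simple spectral subtraction using per-motor noise spectra looked up from instantaneous motor voltages. The lookup table, quantization step, and subtraction rule are assumptions, not the claimed technique.

```python
import numpy as np

def reduce_motor_noise(frame: np.ndarray,
                       motor_voltages: list[float],
                       noise_model: dict[int, np.ndarray],
                       volts_per_bin: float = 1.0) -> np.ndarray:
    """Illustrative spectral subtraction driven by instantaneous motor voltages.

    noise_model maps a quantized voltage to an expected magnitude spectrum for one
    motor (assumed to be measured offline, with the same FFT size as `frame`).
    The spectra of all motors are summed and subtracted from the captured frame.
    """
    spectrum = np.fft.rfft(frame)
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    expected_noise = np.zeros_like(mag)
    for v in motor_voltages:
        key = int(round(v / volts_per_bin))
        if key in noise_model:
            expected_noise += noise_model[key]
    cleaned = np.maximum(mag - expected_noise, 0.0)   # simple magnitude subtraction
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=frame.size)
```
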
  • Patent number: 9858932
    Abstract: Embodiments are directed to a method of representing spatial rendering metadata for processing in an object-based audio system that allows for lossless interpolation and/or re-sampling of the metadata. The method comprises time stamping the metadata to create metadata instances, and encoding with each metadata instance an interpolation duration that specifies the time to reach the desired rendering state for the respective instance. The re-sampling of metadata is useful for re-clocking metadata to an audio coder and for editing audio content.
    Type: Grant
    Filed: July 1, 2014
    Date of Patent: January 2, 2018
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Brian George Arnott, Dirk Jeroen Breebaart, Antonio Mateos Sole, David S. McGrath, Heiko Purnhagen, Freddie Sanchez, Nicolas R. Tsingos
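
The abstract above pairs each time-stamped metadata instance with an interpolation duration so trajectories can be re-sampled without loss. A minimal sketch of how a renderer might evaluate such a ramp between two consecutive instances; the class layout and the linear ramp are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class MetadataInstance:
    timestamp: float        # seconds at which the instance takes effect
    ramp_duration: float    # seconds to reach the target state from the previous state
    position: tuple[float, float, float]

def position_at(time: float, prev: MetadataInstance, cur: MetadataInstance) -> tuple[float, float, float]:
    """Evaluate the rendered position at `time` between two consecutive instances.

    Before cur.timestamp the previous target holds; afterwards the position ramps
    linearly toward cur.position over cur.ramp_duration, then holds. Because the
    ramp is fully described by (timestamp, ramp_duration), new instances can be
    generated on any time grid without losing the original trajectory.
    """
    if time <= cur.timestamp:
        return prev.position
    alpha = min((time - cur.timestamp) / max(cur.ramp_duration, 1e-9), 1.0)
    return tuple(p + alpha * (c - p) for p, c in zip(prev.position, cur.position))
```
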
  • Patent number: 9838826
    Abstract: Improved tools for authoring and rendering audio reproduction data are provided. Some such authoring tools allow audio reproduction data to be generalized for a wide variety of reproduction environments. Audio reproduction data may be authored by creating metadata for audio objects. The metadata may be created with reference to speaker zones. During the rendering process, the audio reproduction data may be reproduced according to the reproduction speaker layout of a particular reproduction environment.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: December 5, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Nicolas R. Tsingos, Charles Q. Robinson, Jurgen W. Scharpf
  • Publication number: 20170339506
    Abstract: Example embodiments disclosed herein relate to audio object clustering. A method for metadata-preserved audio object clustering is disclosed. The method comprises classifying a plurality of audio objects into a number of categories based on information to be preserved in metadata associated with the plurality of audio objects. The method further comprises assigning a predetermined number of clusters to the categories and allocating an audio object in each of the categories to at least one of the clusters according to the assigning. Corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: December 10, 2015
    Publication date: November 23, 2017
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Lianwu CHEN, Lie LU, Nicolas R. TSINGOS
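
A rough sketch of the metadata-preserved clustering idea in the entry above: objects are first grouped by the category to be preserved, each category receives its own cluster budget, and objects are allocated only to clusters of their own category. The round-robin allocation and dictionary layout are placeholders, not the disclosed allocation rule.

```python
from collections import defaultdict

def cluster_preserving_metadata(objects: list[dict],
                                budgets: dict[str, int]) -> dict[str, list[list[dict]]]:
    """Group audio objects into per-category clusters so preserved metadata never mixes.

    objects: each dict is assumed to carry a 'category' key (e.g. 'dialog', 'music',
    'effects') plus any per-object metadata. budgets assigns each category its share
    of the total cluster count. Within a category this sketch simply round-robins
    objects into the allotted clusters; a real allocator would minimize spatial error.
    """
    by_category = defaultdict(list)
    for obj in objects:
        by_category[obj["category"]].append(obj)

    clusters = {}
    for category, objs in by_category.items():
        n = max(budgets.get(category, 1), 1)
        clusters[category] = [objs[i::n] for i in range(n)]
    return clusters
```
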
  • Patent number: 9813837
    Abstract: In some embodiments, methods for generating an object based audio program including screen-related metadata indicative of at least one warping degree parameter for at least one audio object, or generating a speaker channel-based program including by warping audio content of an object based audio program to a degree determined at least in part by at least one warping degree parameter, or methods for decoding or rendering any such audio program. Other aspects are systems configured to perform such audio signal generation, decoding, or rendering, and audio processing units (e.g., decoders or encoders) including a buffer memory which stores at least one segment of any such audio program.
    Type: Grant
    Filed: November 11, 2014
    Date of Patent: November 7, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. Robinson, Nicolas R. Tsingos, Freddie Sanchez
  • Patent number: 9805725
    Abstract: Embodiments are directed to a method of rendering object-based audio comprising determining an initial spatial position of objects having object audio data and associated metadata, determining a perceptual importance of the objects, and grouping the audio objects into a number of clusters based on the determined perceptual importance of the objects, such that a spatial error caused by moving an object from an initial spatial position to a second spatial position in a cluster is minimized for objects with a relatively high perceptual importance. The perceptual importance is based at least in part on a partial loudness of an object and content semantics of the object.
    Type: Grant
    Filed: November 25, 2013
    Date of Patent: October 31, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Brett G. Crockett, Alan J. Seefeldt, Nicolas R. Tsingos, Rhonda Wilson, Dirk Jeroen Breebaart, Lie Lu, Lianwu Chen
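
The abstract above minimizes spatial error weighted by perceptual importance when grouping objects into clusters. The sketch below illustrates only the cost being traded off, assigning each object to its nearest cluster and weighting its displacement by an importance value (e.g., derived from partial loudness); the nearest-centroid assignment is an illustrative simplification, not the patented grouping.

```python
import numpy as np

def assign_objects(positions: np.ndarray,
                   importance: np.ndarray,
                   centroids: np.ndarray) -> tuple[np.ndarray, float]:
    """Assign each object to its nearest cluster centroid and report the weighted error.

    positions: (num_objects, 3) initial object positions.
    importance: (num_objects,) perceptual importance values.
    centroids: (num_clusters, 3) cluster positions.
    The reported cost is sum_i importance_i * ||p_i - c_assign(i)||, so spatial
    displacement matters more for perceptually important objects.
    """
    dists = np.linalg.norm(positions[:, None, :] - centroids[None, :, :], axis=-1)
    assignment = dists.argmin(axis=1)
    cost = float(np.sum(importance * dists[np.arange(len(positions)), assignment]))
    return assignment, cost
```
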
  • Patent number: 9800991
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name, and object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: October 24, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. Robinson, Nicolas R. Tsingos, Christophe Chabanne
  • Patent number: 9786286
    Abstract: A low-quality rendition of a complex soundtrack is created, synchronized and combined with the soundtrack. The low-quality rendition may be monitored in mastering operations, for example, to control the removal, replacement or addition of aural content in the soundtrack without the need for expensive equipment that would otherwise be required to render the soundtrack.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: October 10, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Dossym Nurmukhanov, Sripal S. Mehta, Stanley G. Cossette, Nicolas R. Tsingos
  • Publication number: 20170289724
    Abstract: During a process, decorrelation may be selectively applied to audio data for an audio object based, at least in part, on whether a speaker for which speaker feed signals will be determined is a surround speaker. In some implementations, decorrelation may be selectively applied according to whether such a speaker is a height speaker. Some implementations may reduce, or even eliminate, audio artifacts such as comb-filter notches and peaks. Some such implementations may increase the size of a “sweet spot” of a reproduction environment.
    Type: Application
    Filed: September 10, 2015
    Publication date: October 5, 2017
    Applicants: DOLBY LABORATORIES LICENSING CORPORATION, DOLBY INTERNATIONAL AB
    Inventors: Dirk Jeroen BREEBAART, Antonio Mateos SOLE, Heiko PURNHAGEN, Nicolas R. TSINGOS
  • Patent number: 9756444
    Abstract: In some embodiments, a method for rendering an audio program indicative of at least one source, including by panning the source along a trajectory comprising source locations using speakers organized as a mesh whose faces are convex N-gons, where N can vary from face to face, and N is not equal to three for at least one face of the mesh, including steps of: for each source location, determining an intersecting face of the mesh (including the source location's projection on the mesh), thereby determining a subset of the speakers whose positions coincide with the intersecting face's vertices, and determining gains (which may be determined by generalized barycentric coordinates) for speaker feeds for driving each speaker subset to emit sound perceived as emitting from the source location corresponding to the subset. Other aspects include systems configured (e.g., programmed) to perform any embodiment of the method.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: September 5, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventor: Nicolas R. Tsingos
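
The abstract above pans a source across the speakers at the vertices of a convex N-gon using gains that may be generalized barycentric coordinates. The sketch below computes mean value coordinates, one well-known family of generalized barycentric coordinates, for a point inside a convex polygon; using them directly as speaker gains is an illustration, not necessarily the exact formulation in the patent.

```python
import numpy as np

def mean_value_coordinates(p: np.ndarray, verts: np.ndarray) -> np.ndarray:
    """Generalized barycentric (mean value) coordinates of a point inside a convex polygon.

    verts: (N, 2) polygon vertices in order; p: (2,) point strictly inside.
    Returns N weights that sum to 1; interpreting them as gains for speakers
    located at the polygon's vertices pans a source at p across that face.
    """
    n = len(verts)
    d = verts - p                                  # vectors from p to each vertex
    r = np.linalg.norm(d, axis=1)
    tan_half = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        dot = np.dot(d[i], d[j])
        angle = np.arctan2(cross, dot)             # signed angle subtended by edge (i, j)
        tan_half[i] = np.tan(angle / 2.0)
    w = np.array([(tan_half[i - 1] + tan_half[i]) / r[i] for i in range(n)])
    return w / w.sum()

# Example: a source at the centre of a unit square gets equal gains on four speakers.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
gains = mean_value_coordinates(np.array([0.5, 0.5]), square)   # ~[0.25, 0.25, 0.25, 0.25]
```
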
  • Patent number: 9756445
    Abstract: Embodiments of the present invention relate to adaptive audio content generation. Specifically, a method for generating adaptive audio content is provided. The method comprises extracting at least one audio object from channel-based source audio content, and generating the adaptive audio content at least partially based on the at least one audio object. Corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: June 17, 2014
    Date of Patent: September 5, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Jun Wang, Lie Lu, Mingqing Hu, Dirk Jeroen Breebaart, Nicolas R. Tsingos
  • Patent number: 9747909
    Abstract: Embodiments are directed to a method for processing an input audio signal, comprising: splitting the input audio signal into at least two components, in which a first component is characterized by fast fluctuations in the input signal envelope and a second component is relatively stationary over time; processing the second, stationary component with a decorrelation circuit; and constructing an output signal by combining the output of the decorrelation circuit with the input signal and/or the first component signal.
    Type: Grant
    Filed: July 23, 2014
    Date of Patent: August 29, 2017
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Dirk Jeroen Breebaart, Lie Lu, Antonio Mateos Sole, Nicolas R. Tsingos
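
A compact sketch of the signal flow in the abstract above: split the input by envelope fluctuation, decorrelate only the stationary part, and recombine. The frame-energy transient detector and the fixed delay standing in for a real all-pass decorrelator are assumptions for illustration.

```python
import numpy as np

def split_and_decorrelate(x: np.ndarray, frame: int = 512, threshold: float = 1.5) -> np.ndarray:
    """Illustrative split of x into transient / stationary parts, decorrelating only the latter.

    Frames whose short-term energy jumps by more than `threshold` relative to the
    previous frame are treated as transient and passed through unchanged; the
    remaining (stationary) content is decorrelated here with a crude fixed delay
    standing in for a proper all-pass decorrelation filter.
    """
    energy = np.array([np.sum(x[i:i + frame] ** 2) for i in range(0, len(x), frame)])
    transient_frames = np.concatenate(([False], energy[1:] > threshold * (energy[:-1] + 1e-12)))

    transient = np.zeros_like(x)
    stationary = np.zeros_like(x)
    for idx, is_transient in enumerate(transient_frames):
        sl = slice(idx * frame, (idx + 1) * frame)
        (transient if is_transient else stationary)[sl] = x[sl]

    decorrelated = np.roll(stationary, 64)    # placeholder for an all-pass decorrelator
    return transient + decorrelated           # recombine per the abstract's output construction
```
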
  • Publication number: 20170238116
    Abstract: Multiple virtual source locations may be defined for a volume within which audio objects can move. A set-up process for rendering audio data may involve receiving reproduction speaker location data and pre-computing gain values for each of the virtual sources according to the reproduction speaker location data and each virtual source location. The gain values may be stored and used during “run time,” during which audio reproduction data are rendered for the speakers of the reproduction environment. During run time, for each audio object, contributions from virtual source locations within an area or volume defined by the audio object position data and the audio object size data may be computed. A set of gain values for each output channel of the reproduction environment may be computed based, at least in part, on the computed contributions. Each output channel may correspond to at least one reproduction speaker of the reproduction environment.
    Type: Application
    Filed: May 3, 2017
    Publication date: August 17, 2017
    Applicants: DOLBY LABORATORIES LICENSING CORPORATION, DOLBY INTERNATIONAL AB
    Inventors: Antonio MATEOS SOLE, Nicolas R. TSINGOS
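
The final entry above precomputes per-speaker gains for a set of virtual source locations at set-up time and, at run time, combines the contributions of the virtual sources covered by an object's position and size. The sketch below follows that two-step structure; the inverse-distance panner, cubic object extent, and averaging rule are illustrative assumptions, not the disclosed renderer.

```python
import numpy as np

def precompute_virtual_source_gains(virtual_sources: np.ndarray,
                                    speakers: np.ndarray) -> np.ndarray:
    """Set-up step: one gain per (virtual source, speaker) pair, here from a
    normalized inverse-distance panner standing in for the actual renderer."""
    d = np.linalg.norm(virtual_sources[:, None, :] - speakers[None, :, :], axis=-1)
    w = 1.0 / (d + 1e-3)
    return w / w.sum(axis=1, keepdims=True)

def object_gains(position: np.ndarray, size: float,
                 virtual_sources: np.ndarray, precomputed: np.ndarray) -> np.ndarray:
    """Run-time step: average the precomputed gains of the virtual sources that
    lie within the cube of half-side `size` centred on the object position."""
    inside = np.all(np.abs(virtual_sources - position) <= size, axis=1)
    if not inside.any():                       # tiny object: fall back to the nearest virtual source
        inside = np.zeros(len(virtual_sources), dtype=bool)
        inside[np.linalg.norm(virtual_sources - position, axis=1).argmin()] = True
    g = precomputed[inside].mean(axis=0)
    return g / np.linalg.norm(g)               # power-normalize the per-speaker gains

# Example: a coarse 5x5 grid of virtual sources on the floor plane and a 4-speaker layout.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5), [0.0]), -1).reshape(-1, 3)
spk = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
table = precompute_virtual_source_gains(grid, spk)
g = object_gains(np.array([0.3, 0.6, 0.0]), 0.2, grid, table)
```
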