Patents by Inventor Nicolas R. Tsingos

Nicolas R. Tsingos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10244343
    Abstract: Improved tools for authoring and rendering audio reproduction data are provided. Some such authoring tools allow audio reproduction data to be generalized for a wide variety of reproduction environments. Audio reproduction data may be authored by creating metadata for audio objects. The metadata may be created with reference to speaker zones. During the rendering process, the audio reproduction data may be reproduced according to the reproduction speaker layout of a particular reproduction environment.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: March 26, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Nicolas R. Tsingos, Charles Q. Robinson, Jurgen W. Scharpf
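    Code sketch: The abstract above describes authoring object metadata against speaker zones so the same content can be rendered to whatever speaker layout a given reproduction environment has. The Python sketch below only illustrates that decoupling; the field names, the zone handling, and the distance-based panner are assumptions for illustration, not the patented algorithm.
      import numpy as np

      def render_gains(obj_pos, allowed_zones, layout):
          """layout: dict speaker_name -> ((x, y) position, zone label)."""
          names, weights = [], []
          for name, (spk_pos, zone) in layout.items():
              if allowed_zones and zone not in allowed_zones:
                  continue                        # metadata excludes this speaker zone
              d = np.linalg.norm(np.array(obj_pos) - np.array(spk_pos))
              names.append(name)
              weights.append(1.0 / (d + 1e-3))    # closer speakers get more weight
          w = np.array(weights)
          gains = w / np.sqrt(np.sum(w ** 2))     # normalize to constant power
          return dict(zip(names, gains))

      # The same object metadata can be rendered against any reproduction layout.
      layout_5_1 = {"L": ((0.0, 1.0), "front"), "R": ((1.0, 1.0), "front"),
                    "Ls": ((0.0, 0.0), "surround"), "Rs": ((1.0, 0.0), "surround")}
      print(render_gains((0.8, 0.2), {"surround"}, layout_5_1))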
  • Publication number: 20190027157
    Abstract: An importance metric, based at least in part on an energy metric, may be determined for each of a plurality of received audio objects. Some methods may involve: determining a global importance metric for all of the audio objects, based, at least in part, on a total energy value calculated by summing the energy metric of each of the audio objects; determining an estimated quantization bit depth and a quantization error for each of the audio objects; calculating a total noise metric for all of the audio objects, the total noise metric being based, at least in part, on a total quantization error corresponding with the estimated quantization bit depth; calculating a total signal-to-noise ratio corresponding with the total noise metric and the total energy value; and determining a final quantization bit depth for each of the audio objects by applying a signal-to-noise ratio threshold to the total signal-to-noise ratio.
    Type: Application
    Filed: January 26, 2017
    Publication date: January 24, 2019
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Nicolas R. TSINGOS, Zachary Gideon COHEN, Vivek KUMAR
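    Code sketch: A minimal Python illustration of the kind of energy-driven bit-depth allocation the abstract describes. The initial allocation rule, the uniform-quantizer noise model, and the iteration strategy below are assumptions for illustration, not the claimed method.
      import numpy as np

      def allocate_bit_depths(objects, snr_threshold_db=60.0,
                              min_bits=8, max_bits=24):
          """objects: list of 1-D numpy arrays (one signal per audio object)."""
          energies = np.array([np.sum(x.astype(float) ** 2) for x in objects])
          total_energy = energies.sum()
          share = energies / max(total_energy, 1e-12)  # per-object energy share,
                                                       # used as a simple importance proxy

          # Initial estimate: objects with more energy get more bits (assumed rule).
          bits = np.clip(np.round(min_bits + share * (max_bits - min_bits)),
                         min_bits, max_bits).astype(int)

          def total_snr_db(bits):
              # Total quantization noise energy per object for a uniform quantizer
              # over [-1, 1]: roughly N * step^2 / 12, with step = 2^(1 - bits).
              steps = 2.0 ** (1 - bits)
              noise = np.array([len(x) * (s ** 2) / 12.0
                                for x, s in zip(objects, steps)])
              return 10 * np.log10(total_energy / max(noise.sum(), 1e-20))

          # Raise bit depths until the total signal-to-noise ratio clears the threshold.
          while total_snr_db(bits) < snr_threshold_db and bits.min() < max_bits:
              bits[np.argmin(bits)] += 1
          return bits

      rng = np.random.default_rng(0)
      objs = [rng.standard_normal(4800) * a for a in (1.0, 0.3, 0.05)]
      print(allocate_bit_depths(objs))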
  • Patent number: 10165387
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated metadata that specifies whether the stream is a channel-based or an object-based stream. Channel-based streams have rendering information encoded by means of channel name; object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) so that it corresponds to the mixer's intent.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: December 25, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. Robinson, Nicolas R. Tsingos, Christophe Chabanne
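    Code sketch: A rough Python illustration of tagging channel-based and object-based streams with metadata and packaging them into one serial bitstream, as the abstract describes. The container layout used here (length-prefixed JSON header followed by float32 payloads) is an assumption for illustration, not the codec defined by the patent.
      import json, struct
      import numpy as np

      def pack(streams):
          """streams: list of dicts with 'audio' (float array) and 'metadata'.
          Channel-based metadata carries a channel name; object-based metadata
          carries an allocentric position interpreted against the playback room."""
          header = json.dumps([s["metadata"] for s in streams]).encode("utf-8")
          payload = b"".join(np.asarray(s["audio"], dtype=np.float32).tobytes()
                             for s in streams)
          return struct.pack("<I", len(header)) + header + payload

      streams = [
          {"metadata": {"type": "channel", "name": "L"},
           "audio": np.zeros(480)},
          {"metadata": {"type": "object", "position": [0.9, 0.2, 0.5]},  # allocentric x, y, z
           "audio": np.zeros(480)},
      ]
      print(len(pack(streams)))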
  • Publication number: 20180367932
    Abstract: Audio signals are received. The audio signals include left and right surround channels. The audio signals are played back using far-field loudspeakers distributed around a space having a plurality of listener positions. The left and right surround channels are played back by a pair of far-field loudspeakers arranged at opposite sides of the space having the plurality of listener positions. An audio component coinciding with or approximating audio content common to the left and right surround channels is obtained. The audio component is played back using at least a pair of near-field transducers arranged at one of the listener positions. Associated systems, methods and computer program products are provided. Systems, methods and computer program products providing a bitstream comprising the audio signals and the audio component are also provided, as well as a computer-readable medium with data representing such audio content.
    Type: Application
    Filed: August 24, 2018
    Publication date: December 20, 2018
    Inventors: Remi AUDFRAY, Nicolas R. TSINGOS, Jurgen W. SCHARPF
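    Code sketch: One crude way to approximate the audio component common to the left and right surround channels, for playback over near-field transducers at a listening position while the residuals stay on the far-field surround pair. The simple mid/side-style split below is an assumption for illustration, not the claimed extraction.
      import numpy as np

      def split_surrounds(ls, rs):
          common = 0.5 * (ls + rs)           # rough estimate of the shared content
          return common, ls - common, rs - common

      t = np.linspace(0, 1, 48000, endpoint=False)
      ls = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 97 * t)
      rs = np.sin(2 * np.pi * 440 * t) - 0.3 * np.sin(2 * np.pi * 97 * t)
      common, ls_res, rs_res = split_surrounds(ls, rs)
      # 'common' -> near-field pair at the seat; residuals -> far-field surround pair.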
  • Patent number: 10158958
    Abstract: Audio perception in local proximity to visual cues is provided. A device includes a video display, a first row of audio transducers, and a second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The audible signal is perceived to emanate from the plane of the video display (e.g., the location of a visual cue) by weighting the outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at the periphery, giving increased fidelity in a center portion of the plane and less fidelity at the periphery.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: December 18, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Christophe Chabanne, Nicolas R. Tsingos, Charles Q. Robinson
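    Code sketch: A toy illustration of weighting the top-row and bottom-row transducers of one column so that the signal is perceived at a given height on the screen. The constant-power sine/cosine law is an illustrative choice, not necessarily the patented weighting.
      import numpy as np

      def column_weights(cue_height):
          """cue_height: 0.0 = bottom edge of the display, 1.0 = top edge."""
          theta = cue_height * np.pi / 2
          return np.sin(theta), np.cos(theta)    # (top gain, bottom gain)

      g_top, g_bottom = column_weights(0.25)     # visual cue in the lower quarter of the screen
      print(round(g_top, 3), round(g_bottom, 3), round(g_top**2 + g_bottom**2, 3))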
  • Publication number: 20180352366
    Abstract: The positions of a plurality of speakers at a media consumption site are determined. Audio information in an object-based format is received. A gain adjustment value for a sound content portion in the object-based format may be determined based on the position of the sound content portion and the positions of the plurality of speakers. Audio information in a ring-based channel format is received. A gain adjustment value for each ring-based channel in a set of ring-based channels may be determined based on the ring to which the ring-based channel belongs and the positions of the speakers at the media consumption site.
    Type: Application
    Filed: June 25, 2018
    Publication date: December 6, 2018
    Applicants: DOLBY LABORATORIES LICENSING CORPORATION, DOLBY INTERNATIONAL AB
    Inventors: Nicolas R. TSINGOS, David S. MCGRATH, Freddie SANCHEZ, Antonio MATEOS SOLE
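    Code sketch: An illustration of deriving gains both for an object-based sound portion and for ring-based channels from measured speaker positions. The distance weighting and the height-based ring assignment are assumptions for illustration, not the claimed determination.
      import numpy as np

      def object_gains(obj_pos, speakers):
          d = np.linalg.norm(np.asarray(speakers) - np.asarray(obj_pos), axis=1)
          w = 1.0 / (d + 1e-3)
          return w / np.sqrt(np.sum(w ** 2))      # constant-power gains per speaker

      def ring_gains(ring_height, speakers, tol=0.25):
          heights = np.asarray(speakers)[:, 2]
          in_ring = np.abs(heights - ring_height) <= tol   # speakers serving this ring
          g = in_ring.astype(float)
          return g / max(np.sqrt(g.sum()), 1e-9)

      speakers = [(0, 1, 0), (1, 1, 0), (0, 0, 0), (1, 0, 0), (0.5, 0.5, 1)]
      print(object_gains((0.8, 0.8, 0.0), speakers))
      print(ring_gains(1.0, speakers))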
  • Publication number: 20180324543
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated metadata that specifies whether the stream is a channel-based or an object-based stream. Channel-based streams have rendering information encoded by means of channel name; object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) so that it corresponds to the mixer's intent.
    Type: Application
    Filed: July 13, 2018
    Publication date: November 8, 2018
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Charles Q. ROBINSON, Nicolas R. TSINGOS, Christophe CHABANNE
  • Publication number: 20180295464
    Abstract: Diffuse or spatially large audio objects may be identified for special processing. A decorrelation process may be performed on audio signals corresponding to the large audio objects to produce decorrelated large audio object audio signals. These decorrelated large audio object audio signals may be associated with object locations, which may be stationary or time-varying locations. For example, the decorrelated large audio object audio signals may be rendered to virtual or actual speaker locations. The output of such a rendering process may be input to a scene simplification process. The decorrelation, associating and/or scene simplification processes may be performed prior to a process of encoding the audio data.
    Type: Application
    Filed: June 14, 2018
    Publication date: October 11, 2018
    Applicants: DOLBY LABORATORIES LICENSING CORPORATION, DOLBY INTERNATIONAL AB
    Inventors: Dirk Jeroen BREEBAART, Lie LU, Nicolas R. TSINGOS, Antonio MATEOS SOLE
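    Code sketch: An illustration of spreading a spatially large object by generating mutually decorrelated copies and attaching them to virtual speaker locations before any downstream scene simplification or encoding. The noise-burst decorrelation filters are an illustrative choice, not the claimed decorrelation process.
      import numpy as np

      def decorrelate(signal, n_outputs, length=512, seed=0):
          rng = np.random.default_rng(seed)
          outputs = []
          for _ in range(n_outputs):
              # Short exponentially decaying noise burst as a decorrelation filter.
              h = rng.standard_normal(length) * np.exp(-np.arange(length) / 64.0)
              h /= np.linalg.norm(h)                      # keep roughly unit energy
              outputs.append(np.convolve(signal, h)[:len(signal)])
          return outputs

      virtual_locations = [(-1, 1, 0), (1, 1, 0), (-1, -1, 0), (1, -1, 0)]
      x = np.random.default_rng(1).standard_normal(48000)
      copies = decorrelate(x, len(virtual_locations))     # one decorrelated copy per virtual location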
  • Publication number: 20180268829
    Abstract: Methods for generating an object-based audio program which is renderable in a personalizable manner, e.g., to provide an immersive perception of the program's audio content. Other embodiments include steps of delivering (e.g., broadcasting), decoding, and/or rendering such a program. Rendering of audio objects indicated by the program may provide an immersive experience. The audio content of the program may be indicative of multiple object channels (e.g., object channels indicative of user-selectable and user-configurable objects, and typically also a default set of objects which will be rendered in the absence of a selection by a user) and a bed of speaker channels. Another aspect is an audio processing unit (e.g., encoder or decoder) configured to perform, or which includes a buffer memory which stores at least one frame (or other segment) of an object-based audio program (or bitstream thereof) generated in accordance with, any embodiment of the method.
    Type: Application
    Filed: May 24, 2018
    Publication date: September 20, 2018
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Robert Andrew FRANCE, Thomas ZIEGLER, Sripal S. MEHTA, Andrew Jonathan DOWELL, Prinyar SAUNGSOMBOON, Michael David DWYER, Farhad FARAHANI, Nicolas R. TSINGOS, Freddie SANCHEZ
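    Code sketch: A toy model of the program structure described above: a bed of speaker channels plus selectable object channels with a default set rendered when the user makes no selection. The mixing rule is an assumption for illustration, not the claimed rendering.
      import numpy as np

      def render_program(bed, objects, default_ids, selected_ids=None):
          """bed: dict channel_name -> signal; objects: dict id -> (signal, gain)."""
          chosen = default_ids if selected_ids is None else selected_ids
          mix = {name: sig.copy() for name, sig in bed.items()}
          for obj_id in chosen:
              sig, gain = objects[obj_id]
              for name in mix:                     # crude: spread each object over the bed
                  mix[name] += gain * sig / len(mix)
          return mix

      n = 480
      bed = {"L": np.zeros(n), "R": np.zeros(n)}
      objects = {"dialog": (np.ones(n), 1.0), "alt_commentary": (np.ones(n), 0.8)}
      mix = render_program(bed, objects, default_ids=["dialog"])   # no user selection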
  • Publication number: 20180270598
    Abstract: Audio perception in local proximity to visual cues is provided. A device includes a video display, a first row of audio transducers, and a second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The audible signal is perceived to emanate from the plane of the video display (e.g., the location of a visual cue) by weighting the outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at the periphery, giving increased fidelity in a center portion of the plane and less fidelity at the periphery.
    Type: Application
    Filed: October 19, 2016
    Publication date: September 20, 2018
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Christophe Chabanne, Nicolas R. Tsingos, Charles Q. Robinson
  • Patent number: 10063985
    Abstract: Audio signals (201) are received. The audio signals include left and right surround channels (206). The audio signals are played back using far-field loudspeakers (101-108, 401-406) distributed around a space (111, 409) having a plurality of listener positions (112, 410). The left and right surround channels are played back by a pair of far-field loudspeakers (103, 106, 403, 405) arranged at opposite sides of the space having the plurality of listener positions. An audio component (208) coinciding with or approximating audio content common to the left and right surround channels is obtained. The audio component is played back using at least a pair of near-field transducers (109, 110, 407, 408) arranged at one of the listener positions. Associated systems (100, 400), methods (800) and computer program products are provided.
    Type: Grant
    Filed: May 12, 2016
    Date of Patent: August 28, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Remi Audfray, Nicolas R. Tsingos, Jurgen W. Scharpf
  • Patent number: 10057708
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated metadata that specifies whether the stream is a channel-based or an object-based stream. Channel-based streams have rendering information encoded by means of channel name; object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) so that it corresponds to the mixer's intent.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: August 21, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. Robinson, Nicolas R. Tsingos, Christophe Chabanne
  • Publication number: 20180210695
    Abstract: Embodiments are described for a method of rendering audio for playback through headphones comprising receiving digital audio content, receiving binaural rendering metadata generated by an authoring tool processing the received digital audio content, receiving playback metadata generated by a playback device, and combining the binaural rendering metadata and playback metadata to optimize playback of the digital audio content through the headphones.
    Type: Application
    Filed: March 23, 2018
    Publication date: July 26, 2018
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Nicolas R. TSINGOS, Rhonda WILSON, Sunil BHARITKAR, C. Phillip BROWN, Alan J. SEEFELDT, Remi AUDFRAY
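    Code sketch: A minimal illustration of combining authoring-side binaural rendering metadata with playback-side metadata before headphone rendering. The field names and the override rule (non-null playback values take precedence) are assumptions for illustration, not the claimed combination.
      def combine_metadata(binaural_md, playback_md):
          merged = dict(binaural_md)
          merged.update({k: v for k, v in playback_md.items() if v is not None})
          return merged

      binaural_md = {"room_model": "studio", "direct_gain_db": 0.0}      # from authoring tool
      playback_md = {"room_model": None, "headphone_eq": "model_x"}      # from playback device
      print(combine_metadata(binaural_md, playback_md))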
  • Patent number: 10034117
    Abstract: The positions of a plurality of speakers at a media consumption site are determined. Audio information in an object-based format is received. A gain adjustment value for a sound content portion in the object-based format may be determined based on the position of the sound content portion and the positions of the plurality of speakers. Audio information in a ring-based channel format is received. A gain adjustment value for each ring-based channel in a set of ring-based channels may be determined based on the ring to which the ring-based channel belongs and the positions of the speakers at the media consumption site.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: July 24, 2018
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Nicolas R. Tsingos, David S. McGrath, Freddie Sanchez, Antonio Mateos Sole
  • Publication number: 20180192230
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated metadata that specifies whether the stream is a channel-based or an object-based stream. Channel-based streams have rendering information encoded by means of channel name; object-based streams have location information encoded through location expressions in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) so that it corresponds to the mixer's intent.
    Type: Application
    Filed: February 26, 2018
    Publication date: July 5, 2018
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Charles Q. ROBINSON, Nicolas R. TSINGOS, Christophe CHABANNE
  • Publication number: 20180192186
    Abstract: Input audio data, including first microphone audio signals and second microphone audio signals output by a pair of coincident, vertically-stacked directional microphones, may be received. An azimuthal angle corresponding to a sound source location may be determined, based at least in part on an intensity difference between the first microphone audio signals and the second microphone audio signals. An elevation angle corresponding to a sound source location may be determined, based at least in part on a temporal difference between the first microphone audio signals and the second microphone audio signals. Output audio data, including at least one audio object corresponding to a sound source, may be generated. The audio object may include audio object signals and associated audio object metadata. The audio object metadata may include at least audio object location data corresponding to the sound source location.
    Type: Application
    Filed: July 1, 2016
    Publication date: July 5, 2018
    Applicant: Dolby Laboratories Licensing Corporation
    Inventor: Nicolas R. TSINGOS
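    Code sketch: A sketch in the spirit of the abstract: azimuth estimated from the level difference between the two stacked directional capsules, and elevation from their small time difference. The mapping functions, the capsule spacing (SPACING), and the sample rate (FS) are assumptions for illustration, not the published method.
      import numpy as np

      FS = 48000               # assumed sample rate
      SPEED_OF_SOUND = 343.0   # m/s
      SPACING = 0.02           # assumed vertical spacing between capsules, metres

      def estimate_azimuth(x_top, x_bottom):
          # Assumed monotonic mapping from the capsules' level difference (dB)
          # to an azimuth in degrees.
          level_diff_db = 10 * np.log10(np.sum(x_top ** 2) / np.sum(x_bottom ** 2))
          return float(np.clip(level_diff_db / 20.0, -1.0, 1.0) * 90.0)

      def estimate_elevation(x_top, x_bottom):
          # Time difference via cross-correlation, converted to an elevation angle
          # using the small vertical capsule spacing.
          corr = np.correlate(x_top, x_bottom, mode="full")
          lag = np.argmax(corr) - (len(x_bottom) - 1)     # in samples
          sin_el = np.clip((lag / FS) * SPEED_OF_SOUND / SPACING, -1.0, 1.0)
          return float(np.degrees(np.arcsin(sin_el)))

      rng = np.random.default_rng(0)
      x = rng.standard_normal(4800)
      x_top, x_bottom = np.roll(x, 1), x          # top capsule lags by one sample
      print(estimate_azimuth(x_top, x_bottom), estimate_elevation(x_top, x_bottom))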
  • Patent number: 10003907
    Abstract: Diffuse or spatially large audio objects may be identified for special processing. A decorrelation process may be performed on audio signals corresponding to the large audio objects to produce decorrelated large audio object audio signals. These decorrelated large audio object audio signals may be associated with object locations, which may be stationary or time-varying locations. For example, the decorrelated large audio object audio signals may be rendered to virtual or actual speaker locations. The output of such a rendering process may be input to a scene simplification process. The decorrelation, associating and/or scene simplification processes may be performed prior to a process of encoding the audio data.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: June 19, 2018
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Dirk Jeroen Breebaart, Lie Lu, Nicolas R. Tsingos, Antonio Mateos Sole
  • Publication number: 20180167756
    Abstract: Multiple virtual source locations may be defined for a volume within which audio objects can move. A set-up process for rendering audio data may involve receiving reproduction speaker location data and pre-computing gain values for each of the virtual sources according to the reproduction speaker location data and each virtual source location. The gain values may be stored and used during “run time,” during which audio reproduction data are rendered for the speakers of the reproduction environment. During run time, for each audio object, contributions from virtual source locations within an area or volume defined by the audio object position data and the audio object size data may be computed. A set of gain values for each output channel of the reproduction environment may be computed based, at least in part, on the computed contributions. Each output channel may correspond to at least one reproduction speaker of the reproduction environment.
    Type: Application
    Filed: February 12, 2018
    Publication date: June 14, 2018
    Applicants: DOLBY LABORATORIES LICENSING CORPORATION, DOLBY INTERNATIONAL AB
    Inventors: Antonio MATEOS SOLE, Nicolas R. TSINGOS
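    Code sketch: An illustration of the two-stage scheme described above: a gain table from every virtual source location to every reproduction speaker is pre-computed at set-up, and at run time the entries for the virtual sources falling inside an object's position-and-size extent are summed per output channel. The distance-based gains and the box-shaped extent are assumptions for illustration, not the claimed computation.
      import numpy as np

      def setup_gain_table(virtual_sources, speakers):
          vs = np.asarray(virtual_sources, dtype=float)       # (V, 3)
          sp = np.asarray(speakers, dtype=float)              # (S, 3)
          d = np.linalg.norm(vs[:, None, :] - sp[None, :, :], axis=2)
          g = 1.0 / (d + 1e-3)
          return g / np.linalg.norm(g, axis=1, keepdims=True)  # (V, S): one row per virtual source

      def runtime_object_gains(obj_pos, obj_size, virtual_sources, gain_table):
          vs = np.asarray(virtual_sources, dtype=float)
          inside = np.all(np.abs(vs - np.asarray(obj_pos)) <= obj_size / 2.0, axis=1)
          g = gain_table[inside].sum(axis=0)                   # one value per output channel
          return g / max(np.linalg.norm(g), 1e-9)

      grid = [(x, y, 0.0) for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)]
      speakers = [(0, 1, 0), (1, 1, 0), (0, 0, 0), (1, 0, 0)]
      table = setup_gain_table(grid, speakers)                 # done once at set-up
      print(runtime_object_gains((0.75, 0.75, 0.0), 0.5, grid, table))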
  • Patent number: 9997164
    Abstract: Methods for generating an object-based audio program which is renderable in a personalizable manner, e.g., to provide an immersive perception of the program's audio content. Other embodiments include steps of delivering (e.g., broadcasting), decoding, and/or rendering such a program. Rendering of audio objects indicated by the program may provide an immersive experience. The audio content of the program may be indicative of multiple object channels (e.g., object channels indicative of user-selectable and user-configurable objects, and typically also a default set of objects which will be rendered in the absence of a selection by a user) and a bed of speaker channels. Another aspect is an audio processing unit (e.g., encoder or decoder) configured to perform, or which includes a buffer memory which stores at least one frame (or other segment) of an object-based audio program (or bitstream thereof) generated in accordance with, any embodiment of the method.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: June 12, 2018
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Robert Andrew France, Thomas Ziegler, Sripal S. Mehta, Andrew Jonathan Dowell, Prinyar Saungsomboon, Michael David Dwyer, Farhad Farahani, Nicolas R. Tsingos, Freddie Sanchez
  • Patent number: 9992600
    Abstract: Multiple virtual source locations may be defined for a volume within which audio objects can move. A set-up process for rendering audio data may involve receiving reproduction speaker location data and pre-computing gain values for each of the virtual sources according to the reproduction speaker location data and each virtual source location. The gain values may be stored and used during “run time,” during which audio reproduction data are rendered for the speakers of the reproduction environment. During run time, for each audio object, contributions from virtual source locations within an area or volume defined by the audio object position data and the audio object size data may be computed. A set of gain values for each output channel of the reproduction environment may be computed based, at least in part, on the computed contributions. Each output channel may correspond to at least one reproduction speaker of the reproduction environment.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: June 5, 2018
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Antonio Mateos Sole, Nicolas R. Tsingos