Patents by Inventor Nicolas R. Tsingos

Nicolas R. Tsingos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160163321
    Abstract: Embodiments are directed to a method of representing spatial rendering metadata for processing in an object-based audio system that allows for lossless interpolation and/or re-sampling of the metadata. The method comprises time stamping the metadata to create metadata instances, and encoding with each metadata instance an interpolation duration that specifies the time to reach a desired rendering state for that metadata instance. The re-sampling of metadata is useful for re-clocking metadata to an audio coder and for editing audio content.
    Type: Application
    Filed: July 1, 2014
    Publication date: June 9, 2016
    Applicants: DOLBY INTERNATIONAL AB, Dolby Laboratories Licensing Corporation
    Inventors: Brian George ARNOTT, Dirk Jeroen BREEBAART, Antonio MATEOS SOLE, David S. McGRATH, Heiko PURNHAGEN, Freddie SANCHEZ, Nicolas R. TSINGOS
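    A minimal sketch of the time-stamped metadata with per-instance interpolation durations described in publication 20160163321. The linear ramp and the names MetadataInstance and sample_metadata are illustrative assumptions, not taken from the patent.
    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class MetadataInstance:
        time: float   # time stamp of the instance (seconds)
        value: float  # target rendering state (e.g., an object gain)
        ramp: float   # interpolation duration to reach `value`

    def sample_metadata(instances: List[MetadataInstance], t: float) -> float:
        """Re-sample the metadata at an arbitrary time t using linear ramps."""
        current = instances[0].value
        for inst in instances:          # instances assumed sorted by time
            if t < inst.time:
                break
            if inst.ramp > 0 and t < inst.time + inst.ramp:
                # Inside the ramp: interpolate from the previous state to the target.
                alpha = (t - inst.time) / inst.ramp
                current = current + alpha * (inst.value - current)
            else:
                current = inst.value
        return current

    # Example: re-clock two metadata instances onto a 100 Hz coder frame grid.
    instances = [MetadataInstance(0.0, 0.0, 0.0), MetadataInstance(1.0, 1.0, 0.5)]
    resampled = [sample_metadata(instances, n / 100.0) for n in range(200)]
    ```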
  • Publication number: 20160150343
    Abstract: Embodiments of the present invention relate to adaptive audio content generation. Specifically, a method for generating adaptive audio content is provided. The method comprises extracting at least one audio object from channel-based source audio content, and generating the adaptive audio content at least partially based on the at least one audio object. Corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: June 17, 2014
    Publication date: May 26, 2016
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Jun WANG, Lie LU, Mingqing HU, Dirk Jeroen BREEBAART, Nicolas R. TSINGOS
  • Publication number: 20160055854
    Abstract: A low-quality rendition of a complex soundtrack is created, synchronized and combined with the soundtrack. The low-quality rendition may be monitored in mastering operations, for example, to control the removal, replacement or addition of aural content in the soundtrack without the need for expensive equipment that would otherwise be required to render the soundtrack.
    Type: Application
    Filed: March 13, 2014
    Publication date: February 25, 2016
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Dossym NURMUKHANOV, Sripal S. MEHTA, Stanley G. COSSETTE, Nicolas R. TSINGOS
  • Publication number: 20160044433
    Abstract: In some embodiments, a method for rendering an audio program indicative of at least one source, including by panning the source along a trajectory comprising source locations using speakers organized as a mesh whose faces are convex N-gons, where N can vary from face to face, and N is not equal to three for at least one face of the mesh, including steps of: for each source location, determining an intersecting face of the mesh (including the source location's projection on the mesh), thereby determining a subset of the speakers whose positions coincide with the intersecting face's vertices, and determining gains (which may be determined by generalized barycentric coordinates) for speaker feeds for driving each speaker subset to emit sound perceived as emitting from the source location corresponding to the subset. Other aspects include systems configured (e.g., programmed) to perform any embodiment of the method.
    Type: Application
    Filed: March 19, 2014
    Publication date: February 11, 2016
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventor: Nicolas R. TSINGOS
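    A minimal sketch of gain computation with generalized barycentric (Wachspress) coordinates on a convex N-gon speaker face, in the spirit of publication 20160044433. The counter-clockwise vertex ordering, the strictly interior source projection, and the square-root power normalization are illustrative assumptions rather than the patent's exact gain rule.
    ```python
    import math

    def tri_area(a, b, c):
        """Signed area of the triangle (a, b, c) for 2-D points."""
        return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

    def wachspress_gains(vertices, p):
        """Per-speaker gains for a source projection p strictly inside a convex
        polygon whose vertices are the speaker positions (counter-clockwise)."""
        n = len(vertices)
        weights = []
        for i in range(n):
            prev_v, v, next_v = vertices[i - 1], vertices[i], vertices[(i + 1) % n]
            numer = tri_area(prev_v, v, next_v)
            denom = tri_area(prev_v, v, p) * tri_area(v, next_v, p)
            weights.append(numer / denom)
        total = sum(weights)
        coords = [w / total for w in weights]            # generalized barycentric coordinates
        return [math.sqrt(max(c, 0.0)) for c in coords]  # constant-power gains (assumed law)

    # Example: a square face of four speakers, source projected near one corner.
    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    print(wachspress_gains(square, (0.25, 0.25)))
    ```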
  • Publication number: 20160037280
    Abstract: Improved tools for authoring and rendering audio reproduction data are provided. Some such authoring tools allow audio reproduction data to be generalized for a wide variety of reproduction environments. Audio reproduction data may be authored by creating metadata for audio objects. The metadata may be created with reference to speaker zones. During the rendering process, the audio reproduction data may be reproduced according to the reproduction speaker layout of a particular reproduction environment.
    Type: Application
    Filed: October 9, 2015
    Publication date: February 4, 2016
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Nicolas R. TSINGOS, Charles Q. ROBINSON, Jurgen W. SCHARPF
  • Publication number: 20160029138
    Abstract: Methods for generating an object based audio program which is renderable in a personalizable manner, e.g., to provide an immersive perception of audio content of the program. Other embodiments include steps of delivering (e.g., broadcasting), decoding, and/or rendering such a program. Rendering of audio objects indicated by the program may provide an immersive experience. The audio content of the program may be indicative of multiple object channels (e.g., object channels indicative of user-selectable and user-configurable objects, and typically also a default set of objects which will be rendered in the absence of a selection by a user) and a bed of speaker channels. Another aspect is an audio processing unit (e.g., encoder or decoder) configured to perform, or which includes a buffer memory which stores at least one frame (or other segment) of an object based audio program (or bitstream thereof) generated in accordance with, any embodiment of the method.
    Type: Application
    Filed: March 19, 2014
    Publication date: January 28, 2016
    Applicants: Dolby Laboratories Licensing Corporation, DOLBY INTERNATIONAL AB
    Inventors: Robert Andrew FRANCE, Thomas ZIEGLER, Sripal S. MEHTA, Andrew Jonathan DOWELL, Prinyar SAUNGSOMBOON, Michael David DWYER, Farhad FARAHANI, Nicolas R. Tsingos, Freddie SANCHEZ
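    A minimal sketch of rendering a program made of a bed of speaker channels plus object channels with a default object set and an optional user selection, in the spirit of publication 20160029138. The function and field names, and the assumption that each object's contribution is already panned to the speaker layout, are illustrative, not the patent's bitstream or rendering method.
    ```python
    from typing import Dict, List, Optional
    import numpy as np

    def render_frame(bed: np.ndarray,
                     objects: Dict[str, np.ndarray],
                     default_objects: List[str],
                     user_selection: Optional[List[str]] = None,
                     object_gains: Optional[Dict[str, float]] = None) -> np.ndarray:
        """Mix the bed (channels x samples) with the selected object channels.

        `objects` maps an object name to a (channels x samples) contribution
        already panned to the speaker layout; `object_gains` holds optional
        user-configurable per-object gains."""
        gains = object_gains or {}
        # Fall back to the program's default object set when no selection is made.
        selection = user_selection if user_selection is not None else default_objects
        mix = bed.copy()
        for name in selection:
            mix += gains.get(name, 1.0) * objects[name]
        return mix

    # Example: a 5.1 bed with a commentary object chosen instead of the default crowd.
    bed = np.zeros((6, 480))
    objects = {"crowd": 0.1 * np.random.randn(6, 480),
               "commentary": 0.1 * np.random.randn(6, 480)}
    out = render_frame(bed, objects, default_objects=["crowd"],
                       user_selection=["commentary"], object_gains={"commentary": 1.5})
    ```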
  • Publication number: 20160021476
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name; and the object-based streams have location information encoded through location expressions encoded in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent.
    Type: Application
    Filed: September 25, 2015
    Publication date: January 21, 2016
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Charles Q. ROBINSON, Nicolas R. TSINGOS, Christophe CHABANNE
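    A minimal sketch of allocentric positioning as described in publication 20160021476: object coordinates are expressed relative to the playback room rather than the listener, so the same metadata maps onto rooms of different size and shape. The normalized [0, 1] coordinate convention is an assumption for illustration.
    ```python
    from typing import Tuple

    def allocentric_to_room(position: Tuple[float, float, float],
                            room_dims: Tuple[float, float, float]) -> Tuple[float, ...]:
        """Map a normalized (x, y, z) position, where each axis runs from 0 to 1
        across the room, onto absolute coordinates in metres for a given room."""
        return tuple(p * d for p, d in zip(position, room_dims))

    # The same metadata lands at the corresponding point of both a small mixing
    # room and a large theatre, preserving the mixer's intent.
    print(allocentric_to_room((0.5, 1.0, 0.25), (6.0, 8.0, 3.5)))     # small room
    print(allocentric_to_room((0.5, 1.0, 0.25), (20.0, 30.0, 12.0)))  # theatre
    ```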
  • Publication number: 20160007133
    Abstract: Multiple virtual source locations may be defined for a volume within which audio objects can move. A set-up process for rendering audio data may involve receiving reproduction speaker location data and pre-computing gain values for each of the virtual sources according to the reproduction speaker location data and each virtual source location. The gain values may be stored and used during “run time,” during which audio reproduction data are rendered for the speakers of the reproduction environment. During run time, for each audio object, contributions from virtual source locations within an area or volume defined by the audio object position data and the audio object size data may be computed. A set of gain values for each output channel of the reproduction environment may be computed based, at least in part, on the computed contributions. Each output channel may correspond to at least one reproduction speaker of the reproduction environment.
    Type: Application
    Filed: March 10, 2014
    Publication date: January 7, 2016
    Applicants: DOLBY INTERNATIONAL AB, DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Antonio MATEOS SOLE, Nicolas R. TSINGOS
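    A minimal sketch of the two-stage rendering described in publication 20160007133: speaker gains for a grid of virtual source locations are pre-computed at set-up, then an object's speaker gains are accumulated at run time from the virtual sources that fall inside the region defined by its position and size. The inverse-distance panning law used in the pre-computation is an illustrative assumption, not the patent's gain rule.
    ```python
    import numpy as np

    def precompute_virtual_gains(virtual_sources: np.ndarray,
                                 speakers: np.ndarray) -> np.ndarray:
        """Return an (n_virtual, n_speakers) table of pre-computed gains."""
        dists = np.linalg.norm(virtual_sources[:, None, :] - speakers[None, :, :], axis=2)
        gains = 1.0 / np.maximum(dists, 1e-3)
        return gains / np.linalg.norm(gains, axis=1, keepdims=True)  # power-normalize

    def object_gains(virtual_sources: np.ndarray, gain_table: np.ndarray,
                     obj_pos: np.ndarray, obj_size: float) -> np.ndarray:
        """Sum contributions from virtual sources within the object's extent."""
        inside = np.linalg.norm(virtual_sources - obj_pos, axis=1) <= obj_size
        if not inside.any():                       # point object: nearest virtual source
            inside = np.zeros(len(virtual_sources), dtype=bool)
            inside[np.argmin(np.linalg.norm(virtual_sources - obj_pos, axis=1))] = True
        contrib = gain_table[inside].sum(axis=0)
        return contrib / np.linalg.norm(contrib)   # one gain per output channel

    # Set-up: a coarse 3-D grid of virtual sources and a four-speaker layout.
    grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 5)] * 3), axis=-1).reshape(-1, 3)
    speakers = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
    table = precompute_virtual_gains(grid, speakers)

    # Run time: gains for a large object centred in the room.
    print(object_gains(grid, table, np.array([0.5, 0.5, 0.5]), obj_size=0.3))
    ```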
  • Patent number: 9204236
    Abstract: Improved tools for authoring and rendering audio reproduction data are provided. Some such authoring tools allow audio reproduction data to be generalized for a wide variety of reproduction environments. Audio reproduction data may be authored by creating metadata for audio objects. The metadata may be created with reference to speaker zones. During the rendering process, the audio reproduction data may be reproduced according to the reproduction speaker layout of a particular reproduction environment.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: December 1, 2015
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Nicolas R. Tsingos, Charles Q. Robinson, Jurgen W. Scharpf
  • Publication number: 20150332680
    Abstract: Embodiments are directed to a method of rendering object-based audio comprising determining an initial spatial position of objects having object audio data and associated metadata, determining a perceptual importance of the objects, and grouping the audio objects into a number of clusters based on the determined perceptual importance of the objects, such that a spatial error caused by moving an object from an initial spatial position to a second spatial position in a cluster is minimized for objects with a relatively high perceptual importance. The perceptual importance is based at least in part on a partial loudness of the object and content semantics of the object.
    Type: Application
    Filed: November 25, 2013
    Publication date: November 19, 2015
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Brett G. CROCKETT, Alan J. SEEFELDT, Nicolas R. TSINGOS, Rhonda WILSON, Dirk Jeroen BREEBAART, Lie LU, Lianwu CHEN
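    A minimal sketch of importance-driven object clustering in the spirit of publication 20150332680: the most perceptually important objects anchor the cluster positions, so their spatial error is near zero, while less important objects are folded into the nearest cluster. The greedy top-K anchor selection and the plain loudness-derived importance score are illustrative assumptions.
    ```python
    import numpy as np

    def cluster_objects(positions: np.ndarray, importance: np.ndarray, n_clusters: int):
        """Return (cluster_positions, assignment) for each input object."""
        order = np.argsort(importance)[::-1]           # most important objects first
        anchors = order[:n_clusters]                   # anchors keep their positions
        cluster_pos = positions[anchors]
        dists = np.linalg.norm(positions[:, None, :] - cluster_pos[None, :, :], axis=2)
        assignment = np.argmin(dists, axis=1)          # fold the rest into the nearest cluster
        return cluster_pos, assignment

    positions = np.array([[0.1, 0.9, 0.0], [0.9, 0.9, 0.0], [0.5, 0.1, 0.0], [0.55, 0.15, 0.0]])
    importance = np.array([0.9, 0.8, 0.3, 0.2])        # e.g. derived from partial loudness
    print(cluster_objects(positions, importance, n_clusters=3))
    ```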
  • Patent number: 9179236
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name; and the object-based streams have location information encoded through location expressions encoded in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: November 3, 2015
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. Robinson, Nicolas R. Tsingos, Christophe Chabanne
  • Patent number: 9172901
    Abstract: Audio perception in local proximity to visual cues is provided. A device includes a video display, first row of audio transducers, and second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The perceived emanation of the audible signal is from a plane of the video display (e.g., a location of a visual cue) by weighing outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at a periphery for increased fidelity in a center portion of the plane and less fidelity at the periphery.
    Type: Grant
    Filed: March 20, 2012
    Date of Patent: October 27, 2015
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Christophe Chabanne, Nicolas R. Tsingos, Charles Q. Robinson
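    A minimal sketch of placing a perceived sound source between a transducer above and one below the display by weighting the two outputs of a column, in the spirit of patent 9172901. The sine/cosine constant-power law is an illustrative choice, not necessarily the weighting used in the patent.
    ```python
    import math

    def column_weights(cue_height: float) -> tuple:
        """cue_height: 0.0 = bottom edge of the display, 1.0 = top edge.
        Returns (top_gain, bottom_gain) with constant total power."""
        theta = cue_height * math.pi / 2.0
        return math.sin(theta), math.cos(theta)

    # A visual cue a third of the way up the screen biases output toward the bottom row.
    top, bottom = column_weights(1.0 / 3.0)
    print(f"top={top:.3f}, bottom={bottom:.3f}, power={top**2 + bottom**2:.3f}")
    ```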
  • Patent number: 9118999
    Abstract: Methods and apparatus are described by which equalization and/or bass management of speakers in a sound reproduction system may be accomplished.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: August 25, 2015
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Mark F. Davis, Louis D. Fielder, Nicolas R. Tsingos, Charles Q. Robinson
  • Publication number: 20150235645
    Abstract: In some embodiments, a method (typically performed by a game console) for generating an object based audio program indicative of game audio content (audio content pertaining to play of or events in a game, and optionally also other information regarding the game), and including at least one audio object channel and at least one speaker channel. In other embodiments, a game console configured to generate such an object based audio program. Some embodiments implement object clustering in which audio content of input objects is mixed to generate at least one clustered audio object, or audio content of at least one input object is mixed with speaker channel audio. In response to the program, a spatial rendering system (e.g., external to the game console) may operate with knowledge of playback speaker configuration to generate speaker feeds indicative of a spatial mix of the program's speaker and object channel content.
    Type: Application
    Filed: August 6, 2013
    Publication date: August 20, 2015
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: S. Spencer Hooks, Nicolas R. Tsingos
  • Patent number: 9094771
    Abstract: In some embodiments, a method for upmixing input audio comprising N full range channels to generate 3D output audio comprising N+M full range channels, where the N+M full range channels are intended to be rendered by speakers including at least two speakers at different distances from the listener. The N channel input audio is a 2D audio program whose N full range channels are intended for rendering by N speakers nominally equidistant from the listener. The upmixing of the input audio to generate the 3D output audio is typically performed in an automated manner, in response to cues determined in automated fashion from stereoscopic 3D video corresponding to the input audio, or in response to cues determined in automated fashion from the input audio. Other aspects include a system configured to perform, and a computer readable medium which stores code for implementing any embodiment of the inventive method.
    Type: Grant
    Filed: April 5, 2012
    Date of Patent: July 28, 2015
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Nicolas R. Tsingos, Charles Q. Robinson, Christophe Chabanne, Toni Hirvonen, Patrick Griffis
  • Publication number: 20150146873
    Abstract: Embodiments are described for a method and system of rendering and playing back spatial audio content using a channel-based format. Spatial audio content that is played back through legacy channel-based equipment is transformed into the appropriate channel-based format resulting in the loss of certain positional information within the audio objects and positional metadata comprising the spatial audio content. To retain this information for use in spatial audio equipment even after the audio content is rendered as channel-based audio, certain metadata generated by the spatial audio processor is incorporated into the channel-based data. The channel-based audio can then be sent to a channel-based audio decoder or a spatial audio decoder. The spatial audio decoder processes the metadata to recover at least some positional information that was lost during the down-mix operation by upmixing the channel-based audio content back to the spatial audio content for optimal playback in a spatial audio environment.
    Type: Application
    Filed: June 17, 2013
    Publication date: May 28, 2015
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Christophe Chabanne, Brett Crockett, Spencer Hooks, Alan Seefeldt, Nicolas R. Tsingos, Mark Tuffy, Rhonda Wilson
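    A minimal sketch of carrying spatial metadata alongside a channel-based down-mix so a spatial decoder can later recover positional information, in the spirit of publication 20150146873. The container layout and field names are assumptions for illustration; a legacy decoder would simply ignore the metadata and play the channel bed.
    ```python
    from dataclasses import dataclass, field
    from typing import List
    import numpy as np

    @dataclass
    class ObjectMetadata:
        object_id: int
        position: tuple             # (x, y, z) at the time of the down-mix
        channel_gains: List[float]  # gains used when folding the object into channels

    @dataclass
    class ChannelFrameWithMetadata:
        channels: np.ndarray                        # (n_channels, n_samples) down-mix
        metadata: List[ObjectMetadata] = field(default_factory=list)

    def downmix_object(frame: ChannelFrameWithMetadata, obj_id: int,
                       audio: np.ndarray, position: tuple, gains: List[float]) -> None:
        """Fold an object into the channel bed and record how it was folded."""
        for ch, g in enumerate(gains):
            frame.channels[ch] += g * audio
        frame.metadata.append(ObjectMetadata(obj_id, position, gains))

    # A spatial decoder reads frame.metadata to re-pan (upmix) the content;
    # a channel-based decoder plays frame.channels unchanged.
    frame = ChannelFrameWithMetadata(channels=np.zeros((6, 480)))
    downmix_object(frame, 1, 0.1 * np.random.randn(480), (0.8, 0.2, 0.0),
                   gains=[0.0, 0.7, 0.0, 0.0, 0.7, 0.0])
    ```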
  • Patent number: 8958567
    Abstract: In some embodiments, a method for applying reverberation to audio from at least one client of a set of clients which share a virtual environment, including by asserting position data and at least one input audio stream to a server, selecting (in the server) a reverberation filter for each input audio stream in response to the position data, and generating wet audio by applying to the input audio an early reverberation part of the selected reverberation filter. Typically, a client applies a late reverberation filter to the wet audio using metadata from the server. In other embodiments, a server selects a reverberation filter for application to audio in response to position data, asserts the audio and metadata indicative of the filter, and a client applies the filter to the audio using the metadata. Other aspects are systems, servers, and client devices configured to perform any embodiment of the method.
    Type: Grant
    Filed: June 5, 2012
    Date of Patent: February 17, 2015
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Nicolas R. Tsingos, Micah Taylor
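    A minimal sketch of the server/client split described in patent 8958567: the server selects a reverberation filter from position data and applies only its early part to produce "wet" audio, while the client applies the late part identified by the server's metadata. The filter selection by distance bucket, the exponentially decaying noise filters, and the early/late boundary are illustrative assumptions.
    ```python
    import numpy as np

    FILTERS = {  # reverberation filters indexed by a coarse distance bucket
        "near": np.random.randn(2048) * np.exp(-np.arange(2048) / 300.0),
        "far":  np.random.randn(4096) * np.exp(-np.arange(4096) / 900.0),
    }
    EARLY_SAMPLES = 1024  # boundary between early reflections and the late tail

    def server_process(dry: np.ndarray, source_pos, listener_pos):
        """Pick a filter from the position data and return (wet_audio, filter_id)."""
        distance = np.linalg.norm(np.subtract(source_pos, listener_pos))
        filter_id = "near" if distance < 10.0 else "far"
        early = FILTERS[filter_id][:EARLY_SAMPLES]
        return np.convolve(dry, early), filter_id

    def client_process(wet: np.ndarray, filter_id: str):
        """Apply the late reverberation identified by the server's metadata."""
        late = FILTERS[filter_id][EARLY_SAMPLES:]
        return wet + np.convolve(wet, late)[:len(wet)]

    dry = 0.1 * np.random.randn(4800)
    wet, fid = server_process(dry, source_pos=(0, 0, 0), listener_pos=(3, 4, 0))
    out = client_process(wet, fid)
    ```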
  • Publication number: 20140240610
    Abstract: Audio perception in local proximity to visual cues is provided. A device includes a video display, first row of audio transducers, and second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The perceived emanation of the audible signal is from a plane of the video display (e.g., a location of a visual cue) by weighing outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at a periphery for increased fidelity in a center portion of the plane and less fidelity at the periphery.
    Type: Application
    Filed: May 7, 2014
    Publication date: August 28, 2014
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Christophe Chabanne, Nicolas R. Tsingos, Charles Q. Robinson
  • Patent number: 8755543
    Abstract: Audio perception in local proximity to visual cues is provided. A device includes a video display, first row of audio transducers, and second row of audio transducers. The first and second rows can be vertically disposed above and below the video display. An audio transducer of the first row and an audio transducer of the second row form a column to produce, in concert, an audible signal. The perceived emanation of the audible signal is from a plane of the video display (e.g., a location of a visual cue) by weighing outputs of the audio transducers of the column. In certain embodiments, the audio transducers are spaced farther apart at a periphery for increased fidelity in a center portion of the plane and less fidelity at the periphery.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: June 17, 2014
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Christophe Chabanne, Charles Q. Robinson, Nicolas R. Tsingos
  • Publication number: 20140133683
    Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name; and the object-based streams have location information encoded through location expressions encoded in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent.
    Type: Application
    Filed: June 27, 2012
    Publication date: May 15, 2014
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Charles Q. Robinson, Nicolas R. Tsingos, Christophe Chabanne