Abstract: A system quantifies listener envelopment in a loudspeaker-room environment. The system includes a binaural detector that receives frequency modulated audible noise signals from multiple loudspeakers. The binaural detector generates detected signals that are analyzed to determine an objective listener envelopment. The envelopment is based on binaural activity of one or more sub-bands of the detected signals.
Abstract: A method performed in an audio decoder for decoding M encoded audio channels representing N audio channels is disclosed. The method includes receiving a bitstream containing the M encoded audio channels and a set of spatial parameters, decoding the M encoded audio channels, and extracting the set of spatial parameters from the bitstream. The method also includes analyzing the M audio channels to detect a location of a transient, decorrelating the M audio channels, and deriving N audio channels from the M audio channels and the set of spatial parameters. A first decorrelation technique is applied to a first subset of each audio channel and a second decorrelation technique is applied to a second subset of each audio channel. The first decorrelation technique represents a first mode of operation of a decorrelator, and the second decorrelation technique represents a second mode of operation of the decorrelator.
Abstract: A user content audio signal is converted into sound that is delivered into an ear canal of a wearer of an in-ear speaker, while the in-ear speaker is sealing off the ear canal against ambient sound leakage. An acoustic or venting valve in the in-ear speaker is automatically signaled to open, so that sound inside the ear canal is allowed to travel out into an ambient environment through the valve, while activating conversion of an ambient content audio signal into sound for delivery into the ear canal. Both user content and ambient content are heard by the wearer. The ambient content audio signal is digitally processed so that certain frequency components have been gain adjusted, based on an equalization profile, so as to compensate for some of the insertion loss that is due to the in-ear speaker blocking the ear canal. Other embodiments are also described and claimed.
Abstract: The technology described in this document can be embodied in a computer-implemented method that includes receiving a first plurality of values representing a set of current coefficients of an adaptive filter disposed in an active noise cancellation system. The method also includes computing a second plurality of values each of which represents an instantaneous difference between a current coefficient and a corresponding preceding coefficient of the adaptive filter, and estimating, based on the second plurality of values, one or more instantaneous magnitudes of a transfer function that represents an effect of a secondary path of the active noise cancellation system. The method further includes updating the first plurality of values based on estimates of the one or more instantaneous magnitudes to generate a set of updated coefficients for the adaptive filter, and programming the adaptive filter with the set of updated coefficients.
Abstract: A directivity control apparatus controls a directivity of a sound collected by a first sound collecting unit including a plurality of microphones. The directivity control apparatus includes a directivity forming unit, configured to form a directivity of the sound in a direction toward a monitoring target corresponding to a first designated position in an image displayed on a display unit, and an information obtaining unit, configured to obtain information on a second designated position in the image displayed on the display unit, designated in accordance with a movement of the monitoring target. The directivity forming unit is configured to change the directivity of the sound toward the monitoring target corresponding to the second designated position by referring to the information on the second designated position obtained by the information obtaining unit.
Abstract: Methods are provided for equalizing the group delay of a sound reproduction system, in particular a system comprising acoustic transducers with at least one crossover between a lower-frequency and a higher-frequency range. A correction is applied to a signal in the lower-frequency range, including the crossover region, to substantially equalize the group delay for the lower-frequency range, and a signal delay is applied to a signal in the higher-frequency range to bring it into closer alignment with the equalized lower-frequency range signal. The methods may be implemented in the design of an acoustic transducer system and also via a computer program product, which can be implemented as an update or enhancement to an existing digital signal processor loudspeaker system.
Type:
Grant
Filed:
January 7, 2014
Date of Patent:
September 12, 2017
Assignee:
Meridian Audio Limited
Inventors:
Michael D. Capp, John Robert Stuart, Alan S. J. Wood, Richard J. Hollinshead
Abstract: Embodiments of the present invention relate to adaptive audio content generation. Specifically, a method for generating adaptive audio content is provided. The method comprises extracting at least one audio object from channel-based source audio content, and generating the adaptive audio content at least partially based on the at least one audio object. Corresponding system and computer program product are also disclosed.
Abstract: According to an embodiment, a control filter coefficient is calculated in such a manner that a second spatial average of one or more complex sound pressure ratios at one or more target binaural positions, when a first loudspeaker and a second loudspeaker emit a second acoustic signal and a first acoustic signal, approximates a first spatial average of one or more complex sound pressure ratios at the one or more target binaural positions when a target virtual acoustic source emits the first acoustic signal.
Abstract: A particular method includes determining, based on an inter-line spectral pair (LSP) spacing corresponding to an audio signal, that the audio signal includes a component corresponding to an artifact-generating condition. The method also includes, in response to determining that the audio signal includes the component, adjusting a gain parameter corresponding to the audio signal. For example, the gain parameter may be adjusted via gain attenuation and/or gain smoothing.
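The check described in the abstract above can be pictured with a short sketch. This is a minimal illustration only, assuming hypothetical names (lsp, spacing_threshold, attenuation, smoothing) and a simple minimum-spacing test; the patented detection and gain-adjustment rules may differ.

```python
import numpy as np

def adjust_gain(lsp, gain, prev_gain, spacing_threshold=0.02,
                attenuation=0.5, smoothing=0.8):
    """Hypothetical sketch: attenuate and smooth a gain parameter when a
    narrow inter-LSP spacing suggests an artifact-generating component."""
    spacings = np.diff(np.sort(np.asarray(lsp, dtype=float)))
    if spacings.min() < spacing_threshold:   # component detected
        gain *= attenuation                  # gain attenuation
    # gain smoothing toward the previous frame's value
    return smoothing * prev_gain + (1.0 - smoothing) * gain
```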
Abstract: A parameter representation of a multi-channel signal having several original channels includes a parameter set, which, when used together with at least one down-mix channel allows a multi-channel reconstruction. An additional level parameter is calculated such that an energy of the at least one downmix channel weighted by the level parameter is equal to a sum of energies of the original channels. The additional level parameter is transmitted to a multi-channel reconstructor together with the parameter set or together with a down-mix channel. An apparatus for generating a multi-channel representation uses the level parameter to correct the energy of the at least one transmitted down-mix channel before entering the down-mix signal into an up-mixer or within the up-mixing process.
Type:
Grant
Filed:
January 16, 2014
Date of Patent:
August 22, 2017
Assignee:
DOLBY INTERNATIONAL AB
Inventors:
Heiko Purnhagen, Lars Villemoes, Jonas Engdegard, Jonas Roeden, Kristofer Kjoerling
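As a rough illustration of the level parameter described in the entry above, the sketch below (hypothetical names, one down-mix channel assumed) computes a weight g such that the down-mix energy weighted by g equals the sum of the original channel energies.

```python
import numpy as np

def level_parameter(original_channels, downmix):
    """Hypothetical sketch: g chosen so that g times the down-mix energy
    equals the sum of the energies of the original channels."""
    e_orig = sum(float(np.sum(ch.astype(np.float64) ** 2)) for ch in original_channels)
    e_dmx = float(np.sum(downmix.astype(np.float64) ** 2))
    return e_orig / e_dmx

# Example: correct the energy of a simple two-to-one down-mix before up-mixing.
left, right = np.random.randn(1024), np.random.randn(1024)
dmx = 0.5 * (left + right)
g = level_parameter([left, right], dmx)
```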
Abstract: The encoding and decoding of HOA signals using Singular Value Decomposition includes forming (11), based on sound source direction values and an Ambisonics order, corresponding ket vectors (|Y(Ω_s)⟩) of spherical harmonics and an encoder mode matrix (Ξ_s). From the audio input signal (|x(Ω_s)⟩) a singular threshold value (σ_ε) is determined. On the encoder mode matrix a Singular Value Decomposition (13) is carried out in order to get related singular values, which are compared with the threshold value, leading to a final encoder mode matrix rank (r_fin,e). Based on direction values (Ω_l) of loudspeakers and a decoder Ambisonics order (N_l), corresponding ket vectors (|Y(Ω_l)⟩) and a decoder mode matrix (Ξ_l) are formed (18). On the decoder mode matrix a Singular Value Decomposition (19) is carried out, providing a final decoder mode matrix rank (r_fin,d).
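A minimal numpy sketch of the rank-selection step mentioned above, assuming a generic mode matrix (rows of Ambisonics coefficients, columns of directions) and an externally supplied singular threshold value; the construction of the mode matrices and of the threshold itself follows the patent, not this sketch.

```python
import numpy as np

def mode_matrix_rank(mode_matrix, sigma_eps):
    """Hypothetical sketch: SVD of a mode matrix; the final rank keeps only
    the singular values that exceed the threshold sigma_eps."""
    u, s, vh = np.linalg.svd(mode_matrix, full_matrices=False)
    r_fin = int(np.sum(s > sigma_eps))
    return r_fin, u[:, :r_fin], s[:r_fin], vh[:r_fin, :]
```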
Abstract: A system which tracks a social interaction between a plurality of participants, includes a fixed beamformer that is adapted to output a first spatially filtered output and configured to receive a plurality of second spatially filtered outputs from a plurality of steerable beamformers. Each steerable beamformer outputs a respective one of the second spatially filtered outputs associated with a different one of the participants. The system also includes a processor capable of determining a similarity between the first spatially filtered output and each of the second spatially filtered outputs. The processor determines the social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs.
Type:
Grant
Filed:
November 12, 2012
Date of Patent:
August 15, 2017
Assignee:
QUALCOMM Incorporated
Inventors:
Lae-Hoon Kim, Jongwon Shin, Erik Visser
Abstract: In general, techniques are described for compensating for error in decomposed representations of sound fields. In accordance with the techniques, a device comprising one or more processors may be configured to quantize one or more first vectors representative of one or more components of a sound field, and compensate for error introduced due to the quantization of the one or more first vectors in one or more second vectors that are also representative of the same one or more components of the sound field.
Abstract: The present technology substantially reduces undesirable effects of multi-level noise suppression processing by applying an adaptive signal equalization. A noise suppression system may apply different levels of noise suppression based on the (user-perceived) signal-to-noise-ratio (SNR) or based on an estimated echo return loss (ERL). The resulting high-frequency data attenuation may be counteracted by adapting the signal equalization. The present technology may be applied in both transmit and receive paths of communication devices. Intelligibility may particularly be improved under varying noise conditions, e.g., when a mobile device user is moving in and out of noisy environments.
Abstract: The invention relates to a method for rendering a stereo audio signal over a first loudspeaker and a second loudspeaker with respect to a desired direction, the stereo audio signal comprising a first audio signal component (L) and a second audio signal component (R), the method comprising: providing a first rendering signal based on a combination of L and a first difference signal obtained based on a difference between L and R to the first loudspeaker, and providing a second rendering signal based on a combination of R and a second difference signal obtained based on the difference between L and R to the second loudspeaker, such that both difference signals are different with respect to sign and one difference signal is delayed by a delay compared to the other difference signal to define a dipole signal, wherein the delay is adapted according to the desired direction.
Type:
Grant
Filed:
August 6, 2015
Date of Patent:
July 4, 2017
Assignee:
Huawei Technologies Co., Ltd.
Inventors:
Christof Faller, David Virette, Yue Lang
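A compact sketch of the delayed-difference (dipole) idea in the entry above, with a hypothetical mixing weight alpha; in the patented method the delay is adapted according to the desired direction rather than fixed.

```python
import numpy as np

def render_dipole(L, R, delay_samples, alpha=0.5):
    """Hypothetical sketch: opposite-sign difference signals, one of them
    delayed, mixed back into the left and right rendering signals."""
    d = L - R                                             # difference signal
    d_delayed = np.concatenate((np.zeros(delay_samples), d))[:len(d)]
    out_left = L + alpha * d                              # first rendering signal
    out_right = R - alpha * d_delayed                     # opposite sign, delayed
    return out_left, out_right
```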
Abstract: A system comprising audio processing circuitry is provided. The audio processing circuitry is operable to receive combined-game-and-chat audio signals generated from a mixing together of a chat audio signal and game audio signals. The audio processing circuitry is operable to process the combined-game-and-chat audio signals to detect strength of a chat component of the combined-game-and-chat audio signals and strength of a game component of the combined-game-and-chat audio signals. The audio processing circuitry is operable to automatically control a volume setting based on one or both of: the detected strength of the chat component, and the detected strength of the game component. The combined-game-and-chat audio signals may comprise a left channel signal and a right channel signal. The processing of the combined-game-and-chat audio signals may comprise measuring strength of a vocal-band signal component that is common to the left channel signal and the right channel signal.
Type:
Grant
Filed:
July 24, 2014
Date of Patent:
June 20, 2017
Assignee:
Voyetra Turtle Beach, Inc.
Inventors:
Richard Kulavik, Shobha Devi Kuruba Buchannagari, Carmine Bonanno
Abstract: According to one aspect, an electronic device for detecting an audio accessory is provided. The electronic device includes an audio jack having at least two detection terminals. The detection terminals are spaced apart and positioned within a socket of the audio jack so when an audio plug of the accessory is inserted into the socket of the audio jack, the detection terminals will be shorted. The presence of a short between the detection terminals is indicative that the audio accessory is present.
Type:
Grant
Filed:
February 26, 2013
Date of Patent:
June 13, 2017
Assignee:
BLACKBERRY LIMITED
Inventors:
Jens Kristian Poulsen, Yong Zhang, Per Magnus Fredrik Hansson
Abstract: Disclosed are methods for selecting auditory signal components for reproduction by means of one or more supplementary sound reproducing transducers, such as loudspeakers, placed between a pair of primary sound reproducing transducers, such as left and right loudspeakers in a stereophonic loudspeaker setup or adjacent loudspeakers in a surround sound loudspeaker setup. Also disclosed are devices for carrying out the above methods and systems of such devices.
Type:
Grant
Filed:
September 28, 2010
Date of Patent:
June 6, 2017
Assignee:
Harman Becker Automotive Systems Manufacturing Kft
Inventors:
Patrick James Hegarty, Jan Abildgaard Pedersen
Abstract: A text reading and vocalizing device has an elongate, handheld, manipulable body positioned proximal to a line of text, whereby movement of the body along the line of text positions a light scanner, distally disposed upon a second body part, to optically recognize text for audible indication of the text, sounded by the body or relayed through a pair of headphones interconnected at a headphone jack, wherein text is readable and playable to a user, the text further translatable into an associated language when one of a plurality of language selection buttons, disposed upon the body, is depressed.
Abstract: A method and system for distribution of 3D sound in a vehicle comprising two speakers arranged spaced apart in close vicinity of a head of a vehicle operator. The vehicle operator can look in multiple directions, and the system for distribution of 3D sound comprises means to determine at least one of angle and gaze direction of the head of the vehicle operator. Furthermore, the distribution of the 3D sound is based on the determined angle or gaze direction of the head of the vehicle operator.
Abstract: A spatial audio processor for providing spatial parameters based on an acoustic input signal has a signal characteristics determiner and a controllable parameter estimator. The signal characteristics determiner is configured to determine a signal characteristic of the acoustic input signal. The controllable parameter estimator for calculating the spatial parameters for the acoustic input signal in accordance with a variable spatial parameter calculation rule is configured to modify the variable spatial parameter calculation rule in accordance with the determined signal characteristic.
Type:
Grant
Filed:
September 27, 2012
Date of Patent:
April 18, 2017
Assignee:
Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors:
Oliver Thiergart, Fabian Kuech, Richard Schultz-Amling, Markus Kallinger, Giovanni Del Galdo, Achim Kuntz, Dirk Mahne, Ville Pulkki, Mikko-Ville Laitinen
Abstract: The disclosed subject matter relates to an architecture that can facilitate generation of enhanced spatial effects for stereo audio systems. Such can be accomplished by integrating on top of the ambience signal boosting employed in conventional systems. In particular, the ambience signal can be transformed according to a time-dependent function, which can simulate the auditory impressions of a real-world listening environment that may contain static, regularly moving, and/or irregularly moving elements.
Abstract: Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name; and the object-based streams have location information encoded through location expressions encoded in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent.
Abstract: A method performed by an audio decoder for reconstructing N audio channels from an audio signal containing M audio channels is disclosed. The method includes receiving a bitstream containing an encoded audio signal having M audio channels and a set of spatial parameters, the set of spatial parameters including an inter-channel intensity difference parameter and an inter-channel coherence parameter. The encoded audio bitstream is then decoded to obtain a decoded frequency domain representation of the M audio channels, and at least a portion of the frequency domain representation is decorrelated with an all-pass filter having a fractional delay. The all-pass filter is attenuated at locations of a transient. A matrixed version of the decorrelated signals is summed with a matrixed version of the decoded frequency domain representation to obtain N audio signals that collectively have N audio channels, where M is less than N.
Type:
Grant
Filed:
March 24, 2016
Date of Patent:
April 11, 2017
Assignee:
Dolby International AB
Inventors:
Heiko Purnhagen, Lars Villemoes, Jonas Engdegard, Jonas Roeden, Kristofer Kjoerling
Abstract: Embodiments are directed to an interconnect for coupling components in an object-based rendering system comprising: a first network channel coupling a renderer to an array of individually addressable drivers projecting sound in a listening environment and transmitting audio signals and control data from the renderer to the array, and a second network channel coupling a microphone placed in the listening environment to a calibration component of the renderer and transmitting calibration control signals for acoustic information generated by the microphone to the calibration component. The interconnect is suitable for use in a system for rendering spatial audio content comprising channel-based and object-based audio components.
Abstract: A speech/music discrimination method evaluates the standard deviation between envelope peaks, loudness ratio, and smoothed energy difference. The envelope is searched for peaks above a threshold. The standard deviations of the separations between peaks are calculated. Decreased standard deviation is indicative of speech, while higher standard deviation is indicative of non-speech. The ratio between minimum and maximum loudness in recent input signal data frames is calculated. If this ratio corresponds to the dynamic range characteristic of speech, it is another indication that the input signal is speech content. Smoothed energies of the frames from the left and right input channels are computed and compared. Similar (e.g., highly correlated) left and right channel smoothed energies are indicative of speech. Dissimilar (e.g., un-correlated content) left and right channel smoothed energies are indicative of non-speech material. The results of the three tests are compared to make a speech/music decision.
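The three cues can be sketched as below; thresholds, smoothing constants, and the final speech/music decision logic are placeholders, not the patented values.

```python
import numpy as np
from scipy.signal import find_peaks

def speech_music_cues(envelope, frame_loudness, energy_left, energy_right,
                      peak_threshold):
    """Hypothetical sketch of the three cues: envelope-peak spacing spread,
    loudness min/max ratio, and left/right smoothed-energy similarity."""
    # 1) standard deviation of separations between envelope peaks above a threshold
    peaks, _ = find_peaks(envelope, height=peak_threshold)
    sep_std = float(np.std(np.diff(peaks))) if len(peaks) > 2 else float("inf")

    # 2) ratio between minimum and maximum loudness over recent frames
    loudness_ratio = float(np.min(frame_loudness)) / (float(np.max(frame_loudness)) + 1e-12)

    # 3) similarity of smoothed left and right channel energies
    lr_similarity = 1.0 - abs(energy_left - energy_right) / (energy_left + energy_right + 1e-12)

    return sep_std, loudness_ratio, lr_similarity
```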
Abstract: An audio signal processing apparatus configured to process a plurality of audio signals of a shared-volume stereo system is provided. The audio signal processing apparatus includes a plurality of signal processing channels. The signal processing channels are configured to filter the audio signals to obtain high-frequency components and low-frequency components. The signal processing channels perform a signal processing operation on the low-frequency components to generate a low-frequency modulation signal. The signal processing channels sum up the high-frequency components and the low-frequency modulation signal to reproduce a plurality of audio reproduction signals. The signal processing channels drive loudspeakers of the shared-volume stereo system according to the audio reproduction signals. Furthermore, an audio signal processing method is also provided.
Type:
Grant
Filed:
March 22, 2016
Date of Patent:
April 4, 2017
Assignee:
Merry Electronics (Shenzhen) Co., Ltd.
Inventors:
Diego Jose Hernandez Garcia, Yu-Hsuan Lin, Wen-Hong Wang
Abstract: A multi-rate audio processing system and method provides real-time measurement and processing of amplitude/phase changes in the transition band of the lowest frequency subband caused by the audio processing that can be used to apply amplitude/phase compensation to the higher subband(s). Tone signals may be injected into the transition band to provide strong tonal content for measurement and processing. The real-time measurement and compensation adapts to time-varying amplitude/phase changes regardless of the source of the change (e.g. non-linear time-varying linear or user control parameters) and provides universal applicability for any linear audio processing.
Abstract: A sound reproduction system is capable of providing at least two separate sound zones in one coherent listening room. In each sound zone, the resulting acoustic signals substantially correspond to a respective audio source signal associated with the same sound zone, and the contribution of audio source signals associated with a different sound zone to the resulting sound signal is minimized.
Abstract: A method of determining acoustical characteristics of a room or venue uses a microphone unit having four omni-directional microphones placed at the ends of a tetrahedral mounting unit, equidistant from the middle point of the mounting unit. The four microphones detect impulse responses for each of n sound sources. The detected impulse responses are analyzed: (1) by determining a direct-sound-component direction, delay, and frequency response; (2) by determining an early-reflection direction, delay, and frequency response of each of m early reflections; and (3) in view of late-reverberation components, by determining a delay and frequency responses. Direct-sound-transmission-function filter parameters are calculated based on the determined direct-sound-component direction, delay, and frequency response. M early-reflection-transmission-function filter parameters are calculated based on the m determined directions, delays, and frequency responses of the m early-reflection components.
Abstract: According to various embodiments, a method for outputting a modified audio signal may be provided. The method may include: receiving from a user an input indicating an angle; determining a parameter for a head-related transfer function based on the received input indicating the angle; modifying an audio signal in accordance with the head-related transfer function based on the determined parameter; and outputting the modified audio signal.
Abstract: In some embodiments, a method for automatic detection of polarity of speakers, e.g., speakers installed in cinema environments. In some embodiments, the method determines relative polarities of a set of speakers (e.g., loudspeakers and/or drivers of a multi-driver loudspeaker) using a set of microphones, including by measuring impulse responses, including an impulse response for each speaker-microphone pair; clustering the speakers into a set of groups, each group including at least two of the speakers which are similar to each other in at least one respect; and for each group, determining and analyzing cross-correlations of pairs of impulse responses (e.g., pairs of processed versions of impulse responses) of speakers in the group to determine relative polarities of the speakers. Other aspects include systems configured (e.g., programmed) to perform any embodiment of the inventive method, and computer readable media (e.g., discs) which store code for implementing any embodiment of the inventive method.
Type:
Grant
Filed:
January 17, 2014
Date of Patent:
January 31, 2017
Assignees:
Dolby Laboratories Licensing Corporation, Dolby International AB
Inventors:
Mark F. Davis, Louis Fielder, Antonio Mateos Sole, Giulio Cengarle, Sunil Bharitkar
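A minimal sketch of the per-pair cross-correlation step described in the entry above, assuming the impulse responses have already been measured and processed; grouping the speakers and combining the pairwise results are separate steps in the patented method.

```python
import numpy as np

def relative_polarity(ir_a, ir_b):
    """Hypothetical sketch: relative polarity of two speakers from the sign of
    the dominant cross-correlation peak (+1 same polarity, -1 inverted)."""
    xcorr = np.correlate(ir_a, ir_b, mode="full")
    peak = xcorr[np.argmax(np.abs(xcorr))]
    return 1 if peak >= 0 else -1
```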
Abstract: An audio system has a first channel for receiving a first input signal and driving a first speaker and a second channel for receiving a second input signal and driving a second speaker. A first feedforward circuit couples an input of the second channel circuit to an input of the first channel circuit. A second feedforward circuit couples an input of the first channel circuit to an input of the second channel circuit. Circuit parameters of the first and the second feedforward circuits are determined such that a first detected output signal is zero when the first input signal is non-zero and the second input signal is zero, and a second detected output signal is zero when the second input signal is non-zero and the first input signal is zero. The audio system is configured to operate using the determined circuit parameters for the first and the second feedforward circuits.
Abstract: Embodiments are described for a system of rendering spatial audio content in a listening environment. The system includes a rendering component configured to generate a plurality of audio channels including information specifying a playback location in a listening area, an upmixer component receiving the plurality of audio channels and generating, for each audio channel, at least one reflected sub-channel configured to cause a majority of driver energy to reflect off of one or more surfaces of the listening area, and at least one direct sub-channel configured to cause a majority of driver energy to propagate directly to the playback location.
Abstract: A digital speaker system is constructed that effectively uses the characteristics of a signal processing circuit. A digital speaker system 1 having plural speakers has a signal processing circuit 10 for outputting plural right-sound digital signals and plural left-sound digital signals, and at least one of the plural speakers is operated to function as a monaural sound speaker 2, to which one or more of the right-sound digital signals output from the signal processing circuit 10, and the same number of left-sound digital signals, are input in order to output monaural sound.
Abstract: A method and device are disclosed for determining an inter-channel time difference of a multi-channel audio signal having at least two channels. A determination is made at a number of consecutive time instances, inter-channel correlation based on a cross-correlation function involving at least two different channels of the multi-channel audio signal. Each value of the inter-channel correlation is associated with a corresponding value of the inter-channel time difference. An adaptive inter-channel correlation threshold is adaptively determined based on adaptive smoothing of the inter-channel correlation in time. A current value of the inter-channel correlation is then evaluated in relation to the adaptive inter-channel correlation threshold to determine whether the corresponding current value of the inter-channel time difference is relevant. Based on the result of this evaluation, an updated value of the inter-channel time difference is determined.
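A minimal per-frame sketch of the idea described above, with a hypothetical smoothing constant beta; the patented method's actual normalization, smoothing, and threshold rule may differ.

```python
import numpy as np

def update_itd(ch1, ch2, itd_prev, icc_smoothed, beta=0.9):
    """Hypothetical sketch: candidate inter-channel time difference from the
    cross-correlation peak, accepted only when the inter-channel correlation
    exceeds an adaptively smoothed threshold."""
    xcorr = np.correlate(ch1, ch2, mode="full")
    norm = np.sqrt(np.sum(ch1 ** 2) * np.sum(ch2 ** 2)) + 1e-12
    icc = np.max(np.abs(xcorr)) / norm                  # inter-channel correlation
    lag = int(np.argmax(np.abs(xcorr))) - (len(ch2) - 1)

    icc_smoothed = beta * icc_smoothed + (1.0 - beta) * icc   # adaptive smoothing
    itd = lag if icc >= icc_smoothed else itd_prev            # keep old value otherwise
    return itd, icc_smoothed
```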
Abstract: In general, techniques are described for grouping audio objects into clusters. In some examples, a device for audio signal processing comprises a cluster analysis module configured to, based on a plurality of audio objects, produce a first grouping of the plurality of audio objects into L clusters, wherein the first grouping is based on spatial information from at least N among the plurality of audio objects and L is less than N. The device also includes an error calculator configured to calculate an error of the first grouping relative to the plurality of audio objects, wherein the error calculator is further configured to, based on the calculated error, produce a plurality L of audio streams according to a second grouping of the plurality of audio objects into L clusters that is different from the first grouping.
Type:
Grant
Filed:
July 18, 2013
Date of Patent:
December 6, 2016
Assignee:
QUALCOMM Incorporated
Inventors:
Pei Xiang, Dipanjan Sen, Kerry Titus Hartman
Abstract: An audio reproducing apparatus includes: an obtainment unit configured to obtain a stereo audio signal including an L channel signal and an R channel signal; and a control unit configured to (i) generate a first audio signal for a speaker disposed at an upper position in a listening space and a second audio signal for a speaker disposed at a lower position in the listening space, using the L channel signal and the R channel signal and (ii) determine a gain coefficient corresponding to a degree of correlation between the L channel signal and the R channel signal and multiply by the determined gain coefficient at least one of the first audio signal and the second audio signal, to approximate a ratio between energy of sound reproduced from the first audio signal and energy of sound reproduced from the second audio signal to a predetermined value.
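The gain-coefficient idea in the entry above can be sketched as follows, with a hypothetical linear mapping of the correlation into a gain range; the mapping actually used to approximate the target energy ratio is defined by the patent, not by this sketch.

```python
import numpy as np

def correlation_gain(L, R, g_min=0.5, g_max=1.0):
    """Hypothetical sketch: gain derived from the degree of correlation
    between the L and R channel signals, mapped into [g_min, g_max]."""
    corr = np.sum(L * R) / (np.sqrt(np.sum(L ** 2) * np.sum(R ** 2)) + 1e-12)
    return g_min + (g_max - g_min) * 0.5 * (corr + 1.0)   # map [-1, 1] -> gain range
```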
Abstract: An audio system includes a first speaker and a second speaker that are arranged in front of a predetermined listening position to be substantially bilaterally symmetrical with respect to the listening position; a third speaker and a fourth speaker that are arranged in front of the predetermined listening position to be substantially bilaterally symmetrical with respect to the listening position; a first attenuator that attenuates components that are equal to or less than a predetermined first frequency of an input audio signal; and an output controller that outputs sounds that are based on the input audio signal from the first speaker and the second speaker and outputs, from the third speaker and the fourth speaker, sounds that are based on a first audio signal in which the components that are equal to or less than the first frequency of the input audio signal have been attenuated.
Abstract: An audio processing system (100) comprises a front-end component (102, 103), which receives quantized spectral components and performs an inverse quantization, yielding a time-domain representation of an intermediate signal. The audio processing system further comprises a frequency-domain processing stage (104, 105, 106, 107, 108), configured to provide a time-domain representation of a processed audio signal, and a sample rate converter (109), providing a reconstructed audio signal sampled at a target sampling frequency. The respective internal sampling rates of the time-domain representation of the intermediate audio signal and of the time-domain representation of the processed audio signal are equal. In particular embodiments, the processing stage comprises a parametric upmix stage which is operable in at least two different modes and is associated with a delay stage that ensures constant total delay.
Type:
Grant
Filed:
April 4, 2014
Date of Patent:
October 25, 2016
Assignee:
Dolby International AB
Inventors:
Kristofer Kjoerling, Heiko Purnhagen, Lars Villemoes
Abstract: An up-sampler generates an up-sampled sound signal from the sound signal. From the up-sampled sound signal, an odd-ordered high-harmonic generator generates an odd-ordered high-harmonic, and an even-ordered high-harmonic generator generates an even-ordered high-harmonic. A vowel sound detector identifies whether or not the sound signal is a vowel sound, and generates a first gain value and a second gain value. A first gain controller amplifies or attenuates the odd-ordered high-harmonic based on the first gain value, and outputs the resultant odd-ordered high-harmonic. A second gain controller amplifies or attenuates the even-ordered high-harmonic based on the second gain value, and outputs the resultant even-ordered high-harmonic. The sound signal processing device adds the gain-adjusted odd-ordered high-harmonic and the gain-adjusted even-ordered high-harmonic to the up-sampled sound signal, and outputs the up-sampled sound signal having the high-harmonics added.