Audio User Interaction Recognition and Context Refinement
A system which performs social interaction analysis for a plurality of participants includes a processor. The processor is configured to determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs. The processor is configured to determine the social interaction between the participants based on the similarities between the first spatially filtered output and each of the second spatially filtered outputs and display an output that is representative of the social interaction between the participants. The first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant.
This application claims the benefit under 35 U.S.C. §119(e) of Provisional Patent Application No. 61/645,818, filed May 11, 2012, which is hereby expressly incorporated by reference herein in its entirety.
BACKGROUND
A substantial amount of useful information can be derived from determining the direction a user is looking at different points in time, and this information can be used to enhance the user's interaction with a variety of computational systems. It is therefore not surprising that a vast amount of gaze-tracking research using a vision-based approach (i.e., tracking the eyes by any of several means) has already been undertaken. However, a user's gaze direction gives semantic information about only one dimension of the user's interest and does not capture the contextual information that is mostly carried by speech. In other words, gaze tracking coupled with speech tracking would provide richer and more meaningful information in a variety of different user applications.
SUMMARY
Contextual information (that is, non-visual information being sent or received by a user) is determined using an audio-based approach. Audio user interaction on the receiving side may be enhanced by steering audio beams toward a specific person or a specific sound source. The techniques described herein may therefore allow a user to more clearly understand the context of a conversation, for example. To achieve these benefits, inputs from one or more steerable microphone arrays and inputs from a fixed microphone array may be used to determine whom a person is looking at, or what a person is paying attention to, relative to who is speaking when audio-based contextual information (or even visual-based semantic information) is being presented.
For various implementations, two different types of microphone array devices (MADs) are used. The first type of MAD is a steerable microphone array (also referred to herein as a steerable array) which is worn by a user in a known orientation with regard to the user's eyes, and multiple users may each wear a steerable array. The second type of MAD is a fixed-location microphone array (also referred to herein as a fixed array) which is placed in the same acoustic space as the users (one or more of which are using steerable arrays).
For certain implementations, the steerable microphone array may be part of an active noise control (ANC) headset or hearing aid. There may be multiple steerable arrays, each associated with a different user or speaker (also referred to herein as a participant) in a meeting or group, for example. The fixed microphone array, in such a context, would then be used to separate different people speaking and listening during the group meeting using audio beams corresponding to the direction in which the different people are located relative to the fixed array.
The correlation or similarity between the audio beams of the separated speakers of the fixed array and the outputs of the steerable arrays is evaluated. Correlation is one example of a similarity measure; any of several similarity measurement or determination techniques may be used.
In an implementation, the similarity measure between the audio beams of the separated participants of the fixed array and the outputs of steerable arrays may be used to track social interaction between participants, including gazing direction of the participants over time as different participants speak or present audio-based information.
In an implementation, the similarity measure between the audio beams of the separated participants of the fixed array and the outputs of steerable arrays may be used to zoom in on a targeted participant, for example. This zooming might in turn lead to enhanced noise filtering and amplification when one user (who at that moment is a listener) is gazing at another person who is providing audio-based information (i.e., speaking).
In an implementation, the similarity measure between the audio beams of the separated participants of the fixed array and the outputs of steerable arrays may be used to adaptively form a better beam for a targeted participant, in effect better determining the physical orientation of each of the users relative to each other.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The foregoing summary, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the embodiments, there are shown in the drawings example constructions of the embodiments; however, the embodiments are not limited to the specific methods and instrumentalities disclosed. In the drawings:
Unless expressly limited by its context, the term “signal” is used herein to indicate any of its ordinary meanings, including a state of a memory location (or set of memory locations) as expressed on a wire, bus, or other transmission medium. Unless expressly limited by its context, the term “generating” is used herein to indicate any of its ordinary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term “calculating” is used herein to indicate any of its ordinary meanings, such as computing, evaluating, estimating, and/or selecting from a plurality of values. Unless expressly limited by its context, the term “obtaining” is used to indicate any of its ordinary meanings, such as calculating, deriving, receiving (e.g., from an external device), and/or retrieving (e.g., from an array of storage elements). Unless expressly limited by its context, the term “selecting” is used to indicate any of its ordinary meanings, such as identifying, indicating, applying, and/or using at least one, and fewer than all, of a set of two or more. Where the term “comprising” is used in the present description and claims, it does not exclude other elements or operations. The term “based on” (as in “A is based on B”) is used to indicate any of its ordinary meanings, including the cases (i) “derived from” (e.g., “B is a precursor of A”), (ii) “based on at least” (e.g., “A is based on at least B”) and, if appropriate in the particular context, (iii) “equal to” (e.g., “A is equal to B” or “A is the same as B”). Similarly, the term “in response to” is used to indicate any of its ordinary meanings, including “in response to at least.”
References to a “location” of a microphone of a multi-microphone audio sensing device indicate the location of the center of an acoustically sensitive face of the microphone, unless otherwise indicated by the context. The term “channel” is used at times to indicate a signal path and at other times to indicate a signal carried by such a path, according to the particular context. Unless otherwise indicated, the term “series” is used to indicate a sequence of two or more items. The term “logarithm” is used to indicate the base-ten logarithm, although extensions of such an operation to other bases are within the scope of this disclosure. The term “frequency component” is used to indicate one among a set of frequencies or frequency bands of a signal, such as a sample (or “bin”) of a frequency domain representation of the signal (e.g., as produced by a fast Fourier transform) or a subband of the signal (e.g., a Bark scale or mel scale subband).
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). The term “configuration” may be used in reference to a method, apparatus, and/or system as indicated by its particular context. The terms “method,” “process,” “procedure,” and “technique” are used generically and interchangeably unless otherwise indicated by the particular context. The terms “apparatus” and “device” are also used generically and interchangeably unless otherwise indicated by the particular context. The terms “element” and “module” are typically used to indicate a portion of a greater configuration. Unless expressly limited by its context, the term “system” is used herein to indicate any of its ordinary meanings, including “a group of elements that interact to serve a common purpose.”
Any incorporation by reference of a portion of a document shall also be understood to incorporate definitions of terms or variables that are referenced within the portion, where such definitions appear elsewhere in the document, as well as any figures referenced in the incorporated portion. Unless initially introduced by a definite article, an ordinal term (e.g., “first,” “second,” “third,” etc.) used to modify a claim element does not by itself indicate any priority or order of the claim element with respect to another, but rather merely distinguishes the claim element from another claim element having a same name (but for use of the ordinal term). Unless expressly limited by its context, each of the terms “plurality” and “set” is used herein to indicate an integer quantity that is greater than one.
A combination visual- and hearing-based approach is described herein to enable a user to steer towards a person (or a sound source) in order to more clearly understand the audio-based information being presented at that moment (e.g., the context of conversation and/or the identity of the sound source) using sound sensors and a variety of position-based calculations and resulting interaction enhancements.
For example, the correlation or similarity between the audio beams of the separated speakers of the fixed array and the outputs of steerable arrays may be used to track social interaction between speakers. Correlation is just one example of a similarity measure, and any similarity measurement or determination technique may be used.
More particularly, a social interaction or social networking analysis of a group of users (also referred to herein as speakers or participants) may be performed and displayed using a connection graph generated responsive to the correlation or other similarity measure between the audio beams of the separated speakers of the fixed array and the output of each steerable array respectively associated with each user of the group. Thus, for example, automatic social network analysis may be performed in a group meeting of participants, using a connection graph among the meeting participants, to derive useful information regarding who was actively engaged in the presentation or more generally the effectiveness of the presentation in holding the attention of the users.
A user 105 wearing the headset may generate a fixed beam-pattern 120 from his steerable (e.g., wearable) microphone array which is pointed in the user's physical visual (or “look”) direction. If the user turns his head, then the look direction of the beam-pattern changes as well. The active speaker's location may be determined using the fixed microphone array. By correlating, or otherwise determining the similarity of, the beamformed output (or any type of spatially filtered output) from the steerable microphone array with the fixed microphone array outputs corresponding to each active speaker, the person that a user is looking at (e.g., paying attention to, listening to, etc.) may be identified. Each headset may have a processor that is in communication (e.g., via a wireless communications link) with a main processor (e.g., in a centralized local or remote computing device) that analyzes correlations or similarities of beams between the headsets and/or the fixed arrays.
In other words, fixed beam patterns at any moment in time may be formed based on a user's physical look direction which can be correlated with the fixed microphone array outputs, thereby providing a visual indication, via a connection graph 130 (e.g., displayed on a display of any type of computing device, such as a handset, a laptop, a tablet, a computer, a netbook, or a mobile computing device), of the social interaction of the targeted users. Thus, by correlating a beamformed output from the steerable microphone array with the fixed microphone array outputs, corresponding to each active speaking user, tracking of a social interaction or network analysis may be performed and displayed. Moreover, by checking the similarity between beamformed output from the look-direction-steerable microphone array and the location-fixed microphone array outputs corresponding to each active speaker, the person that a user is looking at or paying attention to can be identified and zoomed into.
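For illustration, the following is a minimal sketch (not taken from the patent text) of how such a connection graph might be assembled from a precomputed similarity matrix; the matrix layout, participant names, and threshold are assumptions for the example.

```python
# Minimal sketch: given a matrix of similarity scores between each user's
# steerable-array beam output and each separated speaker output of the fixed
# array, infer whom each user is looking at and collect connection-graph edges.
import numpy as np

def attention_edges(similarity, participant_ids, threshold=0.5):
    """similarity[u, s]: similarity between user u's steerable beam and
    fixed-array separated speaker s. Returns (listener, speaker) edges."""
    edges = []
    for u, row in enumerate(similarity):
        s = int(np.argmax(row))              # speaker this user's beam matches best
        # Ignore weak matches and self-matches (assumes users and separated
        # speakers share the same index ordering).
        if row[s] >= threshold and u != s:
            edges.append((participant_ids[u], participant_ids[s]))
    return edges

# Example: user 0 (John) appears to be attending to speaker 2 (Mary), etc.
sim = np.array([[0.1, 0.2, 0.9],
                [0.8, 0.1, 0.3],
                [0.2, 0.7, 0.1]])
print(attention_edges(sim, ["John", "Jane", "Mary"]))
```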
A fixed microphone array (such as in a smartphone) with an associated processor performs a direction of arrival (DOA) estimation at 320 in three dimensions (3D) around the fixed microphone array and separates the active speakers at 325. The number of active speakers is determined at 370, and a separate output for each active speaker (identified by an identification number for example) is generated at 380. In an implementation, speaker recognition and labeling of the active speakers may be performed at 330.
The similarity is measured between the separated speakers of the fixed array and the outputs of the steerable arrays at 340. Using the measured similarity and the DOA estimation and the speaker IDs, a visualization of the user interaction (with speaker identity (ID) or participant ID) may be generated and displayed at 350. Each user's look direction may be provided to the fixed array as a smartphone coordinate for example, at 360.
A connection graph (also referred to as an interaction graph) may be generated which displays (a) who is talking and/or listening to whom and/or looking at whom, (b) who is dominating and/or leading the discussion of the group, and/or (c) who is bored, not participating, and/or quiet, for example. Real-time meeting analysis may be performed to assist the efficiency of the meeting and future meetings. Information such as time of meeting, place (e.g., meeting location), speaker identity or participant identity, meeting topic or subject matter, and number of participants, for example, may be displayed and used in the analysis.
Additional data may be displayed on the display 418, such as the meeting time 426, the meeting location 428, the length of the meeting 430 (i.e., the duration), the meeting topic 432, and the number of meeting participants 434. Some or all of this data may be displayed. Additionally or alternatively, other data may be displayed, depending on the implementation, such as the IDs of all the participants and other statistics that may be generated as described further herein. The information and data that is generated for display on the display 418 may be stored in a memory and retrieved and displayed at a later time, as well as being displayed in real-time.
It is noted that a participant will be participating even if she is just listening at the meeting (and not speaking) because that participant's microphone (steerable microphone array) will still be picking up the sounds in the direction she is viewing while she is listening. Thus, even if a participant does not speak, there will still be sounds to analyze that are associated with her listening.
A user interface may be generated and displayed (e.g., on a smartphone display or other computing device display such as a display associated with a handset, a laptop, a tablet, a computer, a netbook, or a mobile computing device) that indicates the various user interactions during the meeting.
In the example of
Social interaction plots may be accumulated over a time period (e.g., over a month, a year, etc.) to assess group dynamics or topic dynamics, for example.
Thus, for example, Jane has a 20% participation rate in meetings about “Design”, a 40% participation rate in meetings about “Code Walkthrough”, and a 10% participation rate in meetings about “Documentation”. This data may be used to determine which participants are most suited for, or interested in, a particular topic, for example, or which participants may need more encouragement with respect to a particular topic. Participation rates may be determined and based on one or more data items described herein, such as amount of time speaking at the meeting, amount of time paying attention at the meeting, amount of time listening at the meeting, etc. Although percentages are shown in
An “L” in the diagram 460 is used as an example indicator of which user participated most in a certain topic, thereby marking a potential leader for that topic. Any indicator may be used, such as a color, highlighting, or a particular symbol. In this example, John participated most in Design, Jane participated most in Code Walkthrough, and Mary participated most in Documentation. Accordingly, they may be identified as potential leaders in the respective topics.
Additionally, a personal time line with an interaction history may be generated for one or more meeting participants. Thus, not only may a single snapshot or period of time during a meeting be captured, analyzed, and the information pertaining to it displayed (either in real-time or later offline), but history over time may also be stored (e.g., in a memory of a computing device such as a smartphone or any type of computing device, such as a handset, a laptop, a tablet, a computer, a netbook, or a mobile computing device), analyzed, and displayed (e.g., in a calendar or other display of a computing device such as a smartphone or any type of computing device, such as a handset, a laptop, a tablet, a computer, a netbook, or a mobile computing device).
The information displayed in
Interaction statistics may also be generated, stored, analyzed, and displayed. For example, the evolution of interaction between people can be tracked and displayed. Recursive weighting over time may be used (e.g., 0.9*historical data+0.1*current data), such that as data gets older, it becomes less relevant, with the most current data being weighted the highest (or vice versa). In this manner, a user may be able to see which people he or others are networking with more than others. Additional statistics may be factored into the analysis to provide more accurate interaction information. For example, interaction information obtained from email exchanges or other communications may be combined with the meeting, history, and/or participant interaction data to provide additional (e.g., more accurate) interaction information.
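As an illustration of the recursive weighting mentioned above (0.9*historical data + 0.1*current data), the following small sketch applies an exponential smoother to a per-pair interaction score; the measurement values and weight are illustrative.

```python
# Exponential smoothing of an interaction statistic so that older data
# gradually loses influence (0.9 * historical + 0.1 * current).
def update_interaction_score(historical, current, history_weight=0.9):
    """Blend the stored interaction statistic with the newest measurement."""
    return history_weight * historical + (1.0 - history_weight) * current

score = 0.0
for weekly_measurement in [3.0, 5.0, 2.0, 4.0]:   # e.g., minutes of mutual attention
    score = update_interaction_score(score, weekly_measurement)
print(round(score, 3))
```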
As another example, online learning monitoring may be performed to determine whether a student in a remote site is actively participating or not. Likewise, an application for video games with participant interaction is also contemplated in which there may be immediate recognition of where the users are looking among the possible sound event locations.
Location mapping may be generated using this information, at 515. Information pertaining to when a user turns to someone and looks at them may be leveraged. A well-known correlation equation, such as that shown at 506, may be used, where E is the expectation value and c is the correlation value. The maximum peak indicates the angle of strongest correlation. In an implementation, the maximum allowable time shift may be predetermined using a physical constraint or system complexity. For example, the time delay between the steerable microphones and the fixed microphones can be measured and used when only the user who wears the steerable array is active. Note that a conventional frame length of 20 ms corresponds to almost 7 meters of sound travel. The angle θ is the relative angle at which the active speaker is located relative to the listening user. The angle θ may be determined between the fixed array and the steerable array, at 513.
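The following is a hedged sketch of the correlation step just described: a normalized cross-correlation between a user's steerable-array (beamformed) output and one separated-speaker output of the fixed array, searched over a bounded time shift. The signal names and maximum-lag value are assumptions for the example.

```python
# Peak normalized cross-correlation over a bounded lag range, used to score
# how well a user's steerable-array beam matches one separated speaker.
import numpy as np

def max_normalized_correlation(steerable_out, fixed_speaker_out, max_lag):
    """Assumes equal-length, roughly time-aligned frames; returns the peak
    normalized cross-correlation over lags in [-max_lag, max_lag]."""
    x = steerable_out - np.mean(steerable_out)
    y = fixed_speaker_out - np.mean(fixed_speaker_out)
    denom = np.sqrt(np.sum(x**2) * np.sum(y**2)) + 1e-12
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(x[lag:] * y[:len(y) - lag])
        else:
            c = np.sum(x[:lag] * y[-lag:])
        best = max(best, c / denom)
    return best

# The separated speaker with the highest peak correlation would be taken as
# the person the user is currently looking at / listening to.
```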
Location mapping may be generated using this information, at 525. Information pertaining to when a user turns to someone and looks at them may be leveraged. A well-known cumulant equation, shown at 526, may be used, where E is the expectation value and c is the correlation value. The maximum peak indicates the angle of strongest correlation. The angle θ is the relative angle at which the active speaker is located relative to the listening user. The angle θ may be determined between the fixed array and the steerable array, at 513.
It is noted that any similarity or correlation technique may be used. As a similarity measure, virtually any distance metric may be used, such as, but not limited to, the well-known techniques of: (1) a least-squares fit with allowable time adjustment, in the time domain or the frequency domain; (2) a feature-based approach using linear predictive coding (LPC) or mel-frequency cepstral coefficients (MFCC); and (3) a higher-order-statistics-based approach using cross-cumulants, the empirical Kullback-Leibler divergence, or the Itakura-Saito distance.
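As one concrete instance from the list above, the following sketch computes the Itakura-Saito distance between the power spectra of two outputs (a smaller distance indicating greater similarity); the FFT size is an assumption.

```python
# Itakura-Saito distance between the power spectra of two signals, usable as
# one of the alternative similarity measures mentioned above.
import numpy as np

def itakura_saito_distance(signal_a, signal_b, n_fft=512, eps=1e-10):
    """Itakura-Saito distance between the (single-frame) power spectra."""
    pa = np.abs(np.fft.rfft(signal_a, n_fft)) ** 2 + eps
    pb = np.abs(np.fft.rfft(signal_b, n_fft)) ** 2 + eps
    ratio = pa / pb
    return float(np.mean(ratio - np.log(ratio) - 1.0))
```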
In an implementation, the correlation or similarity between the audio beams of the separated speakers of the fixed microphone array and the outputs of the steerable microphone arrays may be used to zoom into a targeted speaker. This type of collaborative zooming may provide a user interface for zooming into a desired speaker.
In other words, collaborative zooming may be performed wherein a user interface is provided for multiple users with multiple devices to zoom in on a target speaker by simply looking at that speaker. Beamforming may be performed toward the targeted person via either the headsets or the handsets, such that all available resources of the multiple devices can be combined for collaborative zooming, thereby enhancing the look direction of the targeted person.
For example, a user may look at a target person, and beamforming may be produced at the targeted person by either using the headset or a handset (whichever is closer to the target person). This may be achieved by using a device that includes a hidden camera with two microphones. When multiple users of multiple devices look at the target person, the camera(s) can visually focus on the person. In addition, the device(s) can audibly focus (i.e., zoom in on) the person by using (e.g., all) available microphones to enhance the look direction of the target person.
Additionally, the target person can be audibly zoomed in on by nulling out other speakers and enhancing the target person's voice. The enhancement can also be done using a headset or handset, whichever is closer to the target person.
An exemplary user interface display 600 is shown in
In an implementation, speaker recognition and labeling of the active speakers may be performed at 730. At 750, a correlation or similarity is determined between the separated speakers of the fixed array and the outputs of the steerable arrays. Using the correlation or similarity measurement and the speakers' IDs, a target user can be detected, localized, and zoomed into, at 760.
The user can be replaced with a device, such as a hidden camera with two microphones, and just by looking at the targeted person, the targeted person can be zoomed in on by audition as well as by vision.
A camcorder application with multiple devices is contemplated. The look direction is known, and all available microphones of other devices may be used to enhance the look direction source.
In an implementation, the correlation or similarity between the audio beams of the separated speakers of the fixed array and the outputs of the steerable arrays may be used to adaptively form a better beam for a targeted speaker. In this manner, the fixed microphone array's beamformer may be adaptively refined, such that new look directions can be adaptively generated by the fixed beamformer.
For example, the headset microphone array's beamformer output can be used as a reference to refine the look direction of fixed microphone array's beamformer. The correlation or similarity between the headset beamformer output and the current fixed microphone array beamformer output may be compared with the correlation or similarity between the headset beamformer output and the fixed microphone array beamformer outputs with slightly moved look directions.
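A minimal sketch of that refinement loop follows, assuming a hypothetical callable fixed_beamformer_output(angle) that returns the fixed array's beamformer output for a given look direction; the step size is illustrative.

```python
# Re-steer the fixed array's beamformer to slightly offset look directions and
# keep the direction whose output correlates best with the headset beamformer.
import numpy as np

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def refine_look_direction(headset_out, fixed_beamformer_output,
                          current_angle_deg, step_deg=5.0):
    candidates = [current_angle_deg - step_deg,
                  current_angle_deg,
                  current_angle_deg + step_deg]
    scores = [correlation(headset_out, fixed_beamformer_output(a)) for a in candidates]
    return candidates[int(np.argmax(scores))]   # new, refined look direction
```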
Continuing with
It is a challenge to provide a method for estimating a three-dimensional direction of arrival (DOA) for each frame of an audio signal for concurrent multiple sound events that is sufficiently robust under background noise and reverberation. Robustness can be obtained by maximizing the number of reliable frequency bins. It may be desirable for such a method to be suitable for arbitrarily shaped microphone array geometry, such that specific constraints on microphone geometry may be avoided. A pair-wise 1-D approach as described herein can be appropriately incorporated into any geometry.
A solution may be implemented for such a generic speakerphone application or far-field application. Such an approach may be implemented to operate without a microphone placement constraint. Such an approach may also be implemented to track sources using available frequency bins up to Nyquist frequency and down to a lower frequency (e.g., by supporting use of a microphone pair having a larger inter-microphone distance). Rather than being limited to a single pair for tracking, such an approach may be implemented to select a best pair among all available pairs. Such an approach may be used to support source tracking even in a far-field scenario, up to a distance of three to five meters or more, and to provide a much higher DOA resolution. Other potential features include obtaining an exact 2-D representation of an active source. For best results, it may be desirable that each source is a sparse broadband audio source, and that each frequency bin is mostly dominated by no more than one source.
For a signal received by a pair of microphones directly from a point source in a particular DOA, the phase delay differs for each frequency component and also depends on the spacing between the microphones. The observed value of the phase delay at a particular frequency bin may be calculated as the inverse tangent of the ratio of the imaginary term of the complex FFT coefficient to the real term of the complex FFT coefficient. As shown in the far-field model, the phase delay Δφf at frequency f may be expressed as
Δφf = 2πf(d sin θ)/c,
where d denotes the distance between the microphones (in m), θ denotes the angle of arrival (in radians) relative to a direction that is orthogonal to the array axis, f denotes frequency (in Hz), and c denotes the speed of sound (in m/s). For the ideal case of a single point source with no reverberation, the ratio of phase delay to frequency, Δφ/f, will have the same value, 2π(d sin θ)/c, over all frequencies.
Such an approach is limited in practice by the spatial aliasing frequency for the microphone pair, which may be defined as the frequency at which the wavelength of the signal is twice the distance d between the microphones. Spatial aliasing causes phase wrapping, which puts an upper limit on the range of frequencies that may be used to provide reliable phase delay measurements for a particular microphone pair.
Instead of phase unwrapping, a proposed approach compares the phase delay as measured (e.g., wrapped) with pre-calculated values of wrapped phase delay for each of an inventory of DOA candidates.
For example, for each DOA candidate θi, an error ei may be calculated as the sum
ei = Σf∈F (Δφob(f) − Δφi(f))²
of the squared differences between the observed and candidate wrapped phase delay values over a desired range or other set F of frequency components. The phase delay values Δφi for each DOA candidate θi may be calculated before run-time.
It may be desirable to calculate the error ei across as many frequency bins as possible to increase robustness against noise. For example, it may be desirable for the error calculation to include terms from frequency bins that are beyond the spatial aliasing frequency. In a practical application, the maximum frequency bin may be limited by other factors, which may include available memory, computational complexity, strong reflection by a rigid body at high frequencies, etc.
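As an illustration of the calculation described above, the following sketch pre-computes the wrapped phase delay expected at each frequency bin for an inventory of DOA candidates and sums the squared differences against an observed wrapped phase delay. The array geometry, sampling rate, FFT size, and candidate grid are assumptions.

```python
# Pre-compute wrapped phase-delay values for a DOA candidate inventory and
# evaluate the summed squared error against an observed wrapped phase delay.
import numpy as np

C = 343.0          # speed of sound (m/s)
FS = 16000.0       # sampling rate (Hz)
N_FFT = 512
D = 0.04           # inter-microphone distance (m)

def wrap(phase):
    """Wrap phase to the interval [-pi, pi)."""
    return np.mod(phase + np.pi, 2.0 * np.pi) - np.pi

def candidate_phase_delays(candidate_angles_rad, bins):
    freqs = bins * FS / N_FFT                      # bin center frequencies (Hz)
    # Unwrapped model: delta_phi = 2*pi*f*d*sin(theta)/c, then wrapped per bin.
    unwrapped = 2.0 * np.pi * np.outer(np.sin(candidate_angles_rad), freqs) * D / C
    return wrap(unwrapped)                         # shape: (num_candidates, num_bins)

def candidate_errors(observed_wrapped, candidates_wrapped):
    """e_i = sum over bins of the squared difference of wrapped phase delays."""
    diff = observed_wrapped[np.newaxis, :] - candidates_wrapped
    return np.sum(diff ** 2, axis=1)               # one error per DOA candidate

# Example inventory: candidates from -90 to +90 degrees in 2-degree steps.
candidates = np.radians(np.arange(-90, 91, 2))
bins = np.arange(1, N_FFT // 2 + 1)
inventory = candidate_phase_delays(candidates, bins)
```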
A speech signal is typically sparse in the time-frequency domain. If the sources are disjoint in the frequency domain, then two sources can be tracked at the same time. If the sources are disjoint in the time domain, then two sources can be tracked at the same frequency. It may be desirable for the array to include a number of microphones that is at least equal to the number of different source directions to be distinguished at any one time. The microphones may be omnidirectional (e.g., as may be typical for a cellular telephone or a dedicated conferencing device) or directional (e.g., as may be typical for a device such as a set-top box).
Such multichannel processing is generally applicable, for example, to source tracking for speakerphone applications. Such a technique may be used to calculate a DOA estimate for a frame of the received multichannel signal. Such an approach may calculate, at each frequency bin, the error for each candidate angle with respect to the observed angle, which is indicated by the phase delay. The target angle at that frequency bin is the candidate having the minimum error. In one example, the error is then summed across the frequency bins to obtain a measure of likelihood for the candidate. In another example, one or more of the most frequently occurring target DOA candidates across all frequency bins is identified as the DOA estimate (or estimates) for a given frame.
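The following is a short sketch of the two frame-level combination rules just described, assuming a per-bin error matrix errors[i, f] (candidate i, frequency bin f), such as the per-bin terms before the sum in the earlier sketch.

```python
# Combine per-bin candidate errors into a single frame-level DOA estimate.
import numpy as np

def frame_doa_by_summed_error(errors):
    """Rule 1: sum the error over bins and pick the candidate with the minimum."""
    return int(np.argmin(errors.sum(axis=1)))

def frame_doa_by_vote(errors):
    """Rule 2: pick the best candidate per bin, then take the most frequent one."""
    per_bin_best = np.argmin(errors, axis=0)           # best candidate index per bin
    counts = np.bincount(per_bin_best, minlength=errors.shape[0])
    return int(np.argmax(counts))

# Example with a placeholder error matrix of 90 candidates x 257 bins.
errors = np.random.rand(90, 257)
print(frame_doa_by_summed_error(errors), frame_doa_by_vote(errors))
```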
Such a method may be applied to obtain instantaneous tracking results (e.g., with a delay of less than one frame). The delay is dependent on the FFT size and the degree of overlap. For example, for a 512-point FFT with a 50% overlap and a sampling frequency of 16 kHz, the resulting 256-sample delay corresponds to sixteen milliseconds. Such a method may be used to support differentiation of source directions typically up to a source-array distance of two to three meters, or even up to five meters.
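A quick check of the delay arithmetic quoted above:

```python
# Hop size for a 512-point FFT with 50% overlap, and the corresponding
# delay at a 16 kHz sampling rate.
n_fft, overlap, fs = 512, 0.5, 16000
hop_samples = int(n_fft * (1 - overlap))
print(hop_samples, 1000.0 * hop_samples / fs)   # 256 samples -> 16.0 ms
```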
The error may also be considered as a variance (i.e., the degree to which the individual errors deviate from an expected value). Conversion of the time-domain received signal into the frequency domain (e.g., by applying an FFT) has the effect of averaging the spectrum in each bin. This averaging is even more pronounced if a subband representation is used (e.g., mel scale or Bark scale). Additionally, it may be desirable to perform time-domain smoothing on the DOA estimates (e.g., by applying a recursive smoother, such as a first-order infinite-impulse-response filter).
It may be desirable to reduce the computational complexity of the error calculation operation (e.g., by using a search strategy, such as a binary tree, and/or applying known information, such as DOA candidate selections from one or more previous frames).
Even though the directional information may be measured in terms of phase delay, it is typically desired to obtain a result that indicates source DOA. Consequently, it may be desirable to calculate the error in terms of DOA rather than in terms of phase delay.
An expression of error ei in terms of DOA may be derived by assuming that an expression for the observed wrapped phase delay as a function of DOA, such as
Ψfwr(θ) = mod(2πf(d sin θ)/c + π, 2π) − π,
is equivalent to a corresponding expression for unwrapped phase delay as a function of DOA, such as
Ψfun(θ) = 2πf(d sin θ)/c,
except near discontinuities that are due to phase wrapping. The error ei may then be expressed as
ei = ‖Ψfwr(θob) − Ψfwr(θi)‖f² ≡ ‖Ψfun(θob) − Ψfun(θi)‖f²,
where the difference between the observed and candidate phase delay at frequency f is expressed in terms of DOA as
Ψfun(θob) − Ψfun(θi) = (2πfd/c)(sin θob − sin θi).
Performing a Taylor series expansion of sin θob about θi yields the following first-order approximation:
sin θob − sin θi ≈ (θob − θi) cos θi,
which is used to obtain an expression of the difference between the DOA θob and the DOA candidate θi in terms of the phase delay difference:
θob − θi ≈ (Ψfun(θob) − Ψfun(θi)) c/(2πfd cos θi).
This expression may be used, with the assumed equivalence of observed wrapped phase delay to unwrapped phase delay, to express error ei in terms of DOA:
ei = ‖θob − θi‖f² ≈ ‖(Ψfwr(θob) − Ψfwr(θi)) c/(2πfd cos θi)‖f²,
where the values of [Ψfwr(θob) − Ψfwr(θi)] are the differences between the observed and candidate wrapped phase delays at each frequency f.
To avoid division by zero at the endfire directions (θ=±90°), it may be desirable to perform such an expansion using a second-order approximation instead, as in the following:
As in the first-order example above, this expression may be used, with the assumed equivalence of observed wrapped phase delay to unwrapped phase delay, to express error ei in terms of DOA as a function of the observed and candidate wrapped phase delay values.
As shown in
As shown in
For expression (1), an extremely good match at a particular frequency may cause a corresponding likelihood to dominate all others. To reduce this susceptibility, it may be desirable to include a regularization term A, as in the following expression:
Speech tends to be sparse in both time and frequency, such that a sum over a set of frequencies F may include results from bins that are dominated by noise. It may be desirable to include a bias term β, as in the following expression:
The bias term, which may vary over frequency and/or time, may be based on an assumed distribution of the noise (e.g., Gaussian). Additionally or alternatively, the bias term may be based on an initial estimate of the noise (e.g., from a noise-only initial frame). Additionally or alternatively, the bias term may be updated dynamically based on information from noise-only frames, as indicated, for example, by a voice activity detection module.
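The following is a loosely hedged sketch of how a regularization term and a bias term might enter the likelihood calculation; the exact expressions referenced above are not reproduced here, only the idea that a regularizer prevents one near-perfect bin from dominating and a bias discounts a noise floor. The functional form below is an assumption.

```python
# Turn per-bin squared DOA errors for one candidate into a likelihood score,
# with a regularization constant and a noise-floor bias.
import numpy as np

def candidate_likelihood(per_bin_error, regularization=1e-2, bias=0.0):
    """per_bin_error: squared DOA error for one candidate at each frequency bin."""
    terms = np.maximum(per_bin_error - bias, 0.0)          # bias discounts a noise floor
    return float(np.sum(1.0 / (terms + regularization)))   # regularizer caps any one bin
```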
The frequency-specific likelihood results may be projected onto a (frame, angle) plane to obtain a per-frame DOA estimate θest.
The likelihood results may also be projected onto a (frame, frequency) plane to indicate likelihood information per frequency bin, based on directional membership (e.g., for voice activity detection). This likelihood may be used to indicate likelihood of speech activity. Additionally or alternatively, such information may be used, for example, to support time- and/or frequency-selective masking of the received signal by classifying frames and/or frequency components according to their direction of arrival.
An anglogram representation is similar to a spectrogram representation. An anglogram may be obtained by plotting, at each frame, a likelihood of the current DOA candidate at each frequency
A microphone pair having a large spacing is typically not suitable for high frequencies, because spatial aliasing begins at a low frequency for such a pair. A DOA estimation approach as described herein, however, allows the use of phase delay measurements beyond the frequency at which phase wrapping begins, and even up to the Nyquist frequency (i.e., half of the sampling rate). By relaxing the spatial aliasing constraint, such an approach enables the use of microphone pairs having larger inter-microphone spacings. As an array with a large inter-microphone distance typically provides better directivity at low frequencies than an array with a small inter-microphone distance, use of a larger array typically extends the range of useful phase delay measurements into lower frequencies as well.
The DOA estimation principles described herein may be extended to multiple microphone pairs in a linear array (e.g., as shown in
For a far-field source, the multiple microphone pairs of a linear array will have essentially the same DOA. Accordingly, one option is to estimate the DOA as an average of the DOA estimates from two or more pairs in the array. However, an averaging scheme may be affected by mismatch of even a single one of the pairs, which may reduce DOA estimation accuracy. Alternatively, it may be desirable to select, from among two or more pairs of microphones of the array, the best microphone pair for each frequency (e.g., the pair that gives the minimum error ei at that frequency), such that different microphone pairs may be selected for different frequency bands. At the spatial aliasing frequency of a microphone pair, the error will be large. Consequently, such an approach will tend to automatically avoid a microphone pair when the frequency is close to its wrapping frequency, thus avoiding the related uncertainty in the DOA estimate. For higher-frequency bins, a pair having a shorter distance between the microphones will typically provide a better estimate and may be automatically favored, while for lower-frequency bins, a pair having a larger distance between the microphones will typically provide a better estimate and may be automatically favored. In the four-microphone example shown in
In one example, the best pair for each axis is selected by calculating, for each frequency f, P×I values, where P is the number of pairs, I is the size of the inventory, and each value epi is the squared absolute difference between the observed angle θpf (for pair p and frequency f) and the candidate angle θif. For each frequency f, the pair p that corresponds to the lowest error value epi is selected. This error value also indicates the best DOA candidate θi at frequency f (as shown in
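A sketch of that per-frequency selection follows, assuming an error tensor errors[p, i, f] over P pairs, I candidates, and F bins; the tensor contents here are random placeholders.

```python
# For each frequency bin, choose the (pair, candidate) combination with the
# minimum error, so different pairs may be selected for different bins.
import numpy as np

def select_pair_and_doa_per_bin(errors):
    """Return (best_pair[f], best_candidate[f]) for each frequency bin f."""
    P, I, F = errors.shape
    flat = errors.reshape(P * I, F)
    best_flat = np.argmin(flat, axis=0)        # index into the P*I combinations
    best_pair = best_flat // I
    best_candidate = best_flat % I
    return best_pair, best_candidate

errors = np.random.rand(3, 90, 257)            # 3 pairs, 90 candidates, 257 bins
pairs, candidates = select_pair_and_doa_per_bin(errors)
```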
The signals received by a microphone pair may be processed as described herein to provide an estimated DOA, over a range of up to 180 degrees, with respect to the axis of the microphone pair. The desired angular span and resolution may be arbitrary within that range (e.g. uniform (linear) or nonuniform (nonlinear), limited to selected sectors of interest, etc.). Additionally or alternatively, the desired frequency span and resolution may be arbitrary (e.g. linear, logarithmic, mel-scale, Bark-scale, etc.).
In the model shown in
The DOA estimation principles described herein may also be extended to a two-dimensional (2-D) array of microphones. For example, a 2-D array may be used to extend the range of source DOA estimation up to a full 360° (e.g., providing a similar range as in applications such as radar and biomedical scanning). Such an array may be used in a speakerphone application, for example, to support good performance even for arbitrary placement of the telephone relative to one or more sources.
The multiple microphone pairs of a 2-D array typically will not share the same DOA, even for a far-field point source. For example, source height relative to the plane of the array (e.g., in the z-axis) may play an important role in 2-D tracking.
An expression such as
where θ1 and θ2 are the estimated DOA for pair 1 and 2, respectively, may be used to project all pairs of DOAs to a 360° range in the plane in which the three microphones are located. Such projection may be used to enable tracking directions of active speakers over a 360° range around the microphone array, regardless of height difference. Applying the expression above to project the DOA estimates (0°, 60°) of
which may be mapped to a combined directional estimate (e.g., an azimuth) of 270° as shown in
In a typical use case, the source will be located in a direction that is not projected onto a microphone axis.
For the example shown in
In fact, a 2-D microphone array provides nearly complete 3-D information, lacking only resolution of the up-down confusion. For example, the directions of arrival observed by microphone pairs MC10-MC20 and MC20-MC30 may also be used to estimate the magnitude of the angle of elevation of the source relative to the x-y plane. If d denotes the vector from microphone MC20 to the source, then the lengths of the projections of vector d onto the x-axis, the y-axis, and the x-y plane may be expressed as d sin(θ2), d sin(θ1), and d√(sin²(θ1)+sin²(θ2)), respectively. The magnitude of the angle of elevation may then be estimated as θ̂h = cos⁻¹√(sin²(θ1)+sin²(θ2)).
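A numeric illustration of the elevation estimate given above; the example angle values are arbitrary.

```python
# Elevation magnitude from the per-pair DOA estimates theta1 and theta2:
# theta_h = arccos( sqrt( sin^2(theta1) + sin^2(theta2) ) ).
import numpy as np

def elevation_magnitude(theta1_deg, theta2_deg):
    s = np.sin(np.radians(theta1_deg)) ** 2 + np.sin(np.radians(theta2_deg)) ** 2
    return float(np.degrees(np.arccos(np.sqrt(np.clip(s, 0.0, 1.0)))))

print(elevation_magnitude(0.0, 90.0))   # 0.0  -> source lies in the array plane
print(elevation_magnitude(0.0, 0.0))    # 90.0 -> source directly above/below the array
```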
Although the microphone pairs in the particular examples of
The estimation of y may be performed using the projection p1=(d sin θ1 sin θ0, d sin θ1 cos θ0) of vector (x,y) onto axis 1. Observing that the difference between vector (x,y) and vector p1 is orthogonal to p1, calculate y as
The desired angles of arrival in the x-y plane, relative to the orthogonal x and y axes, may then be expressed respectively as
Extension of DOA estimation to a 2-D array is typically well-suited to and sufficient for a speakerphone application. However, further extension to an N-dimensional array is also possible and may be performed in a straightforward manner. For tracking applications in which one target is dominant, it may be desirable to select N pairs for representing N dimensions. Once a 2-D result is obtained with a particular microphone pair, another available pair can be utilized to increase degrees of freedom. For example,
Estimates of DOA error from different dimensions may be used to obtain a combined likelihood estimate, for example, using an expression such as
where θ0,i denotes the DOA candidate selected for pair i. Use of the maximum among the different errors may be desirable to promote selection of an estimate that is close to the cones of confusion of both observations, in preference to an estimate that is close to only one of the cones of confusion and may thus indicate a false peak. Such a combined result may be used to obtain a (frame, angle) plane, as described herein, and/or a (frame, frequency) plot, as described herein.
The DOA estimation principles described herein may be used to support selection among multiple speakers. For example, location of multiple sources may be combined with a manual selection of a particular speaker (e.g., push a particular button to select a particular corresponding user) or automatic selection of a particular speaker (e.g., by speaker recognition). In one such application, a telephone is configured to recognize the voice of its owner and to automatically select a direction corresponding to that voice in preference to the directions of other sources.
A source DOA may be easily defined in 1-D, e.g. from −90° to +90°. For more than two microphones at arbitrary relative locations, it is proposed to use a straightforward extension of 1-D as described above, e.g. (θ1, θ2) in two-pair case in 2-D, (θ1, θ2, θ3) in three-pair case in 3-D, etc.
A key problem is how to apply spatial filtering to such a combination of paired 1-D DOA estimates. In this case, a beamformer/null beamformer (BFNF) as shown in
As the approach shown in
where lp indicates the distance between the microphones of pair p, ω indicates the frequency bin number, and fs indicates the sampling frequency.
A pair-wise BFNF (PWBFNF) scheme may be used for suppressing the direct path of interferers up to the available degrees of freedom (instantaneous suppression without a smooth-trajectory assumption, additional noise-suppression gain using directional masking, and additional noise-suppression gain using bandwidth extension). Single-channel post-processing within a quadrant framework may be used for stationary-noise and noise-reference handling.
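The following is a loosely hedged sketch of a pair-wise beamformer/null beamformer in the spirit described above: per-pair steering vectors for a target and an interferer are stacked into a manifold matrix whose pseudo-inverse yields weights that pass the target and null the interferer. This is an illustration of the idea, not the patent's exact formulation; the spacings, DOAs, and frequency are assumptions.

```python
# Pair-wise beamformer/null-beamformer weights from per-pair DOA estimates.
import numpy as np

C = 343.0   # speed of sound (m/s)

def pair_steering(theta_rad, spacing_m, freq_hz):
    """Two-element steering vector for one microphone pair at one frequency."""
    delay = spacing_m * np.sin(theta_rad) / C
    return np.array([1.0, np.exp(-2j * np.pi * freq_hz * delay)])

def pwbfnf_weights(target_doas, interferer_doas, pair_spacings, freq_hz):
    """target_doas / interferer_doas: per-pair DOA estimates (radians)."""
    columns = []
    for doas in (target_doas, interferer_doas):
        col = np.concatenate([pair_steering(th, sp, freq_hz)
                              for th, sp in zip(doas, pair_spacings)])
        columns.append(col)
    A = np.stack(columns, axis=1)              # (2 * num_pairs) x num_sources
    W = np.linalg.pinv(A)                      # each row recovers one source
    return W[0]                                # weights that extract the target

w = pwbfnf_weights([0.3, 0.1], [-0.8, 1.0], [0.04, 0.05], freq_hz=1000.0)
```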
It may be desirable to obtain instantaneous suppression but also to provide minimization of artifacts such as musical noise. It may be desirable to maximally use the available degrees of freedom for BFNF. One DOA may be fixed across all frequencies, or a slightly mismatched alignment across frequencies may be permitted. Only the current frame may be used, or a feed-forward network may be implemented. The BFNF may be set for all frequencies in the range up to the Nyquist rate (e.g., except ill-conditioned frequencies). A natural masking approach may be used (e.g., to obtain a smooth natural seamless transition of aggressiveness).
The methods and apparatus disclosed herein may be applied generally in any transceiving and/or audio sensing application, especially mobile or otherwise portable instances of such applications. For example, the range of configurations disclosed herein includes communications devices that reside in a wireless telephony communication system configured to employ a code-division multiple-access (CDMA) over-the-air interface. Nevertheless, it would be understood by those skilled in the art that a method and apparatus having features as described herein may reside in any of the various communication systems employing a wide range of technologies known to those of skill in the art, such as systems employing Voice over IP (VoIP) over wired and/or wireless (e.g., CDMA, TDMA, FDMA, and/or TD-SCDMA) transmission channels.
It is expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in networks that are packet-switched (for example, wired and/or wireless networks arranged to carry audio transmissions according to protocols such as VoIP) and/or circuit-switched. It is also expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in narrowband coding systems (e.g., systems that encode an audio frequency range of about four or five kilohertz) and/or for use in wideband coding systems (e.g., systems that encode audio frequencies greater than five kilohertz), including whole-band wideband coding systems and split-band wideband coding systems.
Examples of codecs that may be used with, or adapted for use with, transmitters and/or receivers of communications devices as described herein include the Enhanced Variable Rate Codec, as described in the Third Generation Partnership Project 2 (3GPP2) document C.S0014-C, v1.0, entitled “Enhanced Variable Rate Codec, Speech Service Options 3, 68, and 70 for Wideband Spread Spectrum Digital Systems,” February 2007 (available online at www-dot-3gpp-dot-org); the Selectable Mode Vocoder speech codec, as described in the 3GPP2 document C.S0030-0, v3.0, entitled “Selectable Mode Vocoder (SMV) Service Option for Wideband Spread Spectrum Communication Systems,” January 2004 (available online at www-dot-3gpp-dot-org); the Adaptive Multi Rate (AMR) speech codec, as described in the document ETSI TS 126 092 V6.0.0 (European Telecommunications Standards Institute (ETSI), Sophia Antipolis Cedex, FR, December 2004); and the AMR Wideband speech codec, as described in the document ETSI TS 126 192 V6.0.0 (ETSI, December 2004). Such a codec may be used, for example, to recover the reproduced audio signal from a received wireless communications signal.
The presentation of the described configurations is provided to enable any person skilled in the art to make or use the methods and other structures disclosed herein. The flowcharts, block diagrams, and other structures shown and described herein are examples only, and other variants of these structures are also within the scope of the disclosure. Various modifications to these configurations are possible, and the generic principles presented herein may be applied to other configurations as well. Thus, the present disclosure is not intended to be limited to the configurations shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein, including in the attached claims as filed, which form a part of the original disclosure.
Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, and symbols that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Important design requirements for implementation of a configuration as disclosed herein may include minimizing processing delay and/or computational complexity (typically measured in millions of instructions per second or MIPS), especially for computation-intensive applications, such as playback of compressed audio or audiovisual information (e.g., a file or stream encoded according to a compression format, such as one of the examples identified herein) or applications for wideband communications (e.g., voice communications at sampling rates higher than eight kilohertz, such as 12, 16, 32, 44.1, 48, or 192 kHz).
An apparatus as disclosed herein (e.g., any device configured to perform a technique as described herein) may be implemented in any combination of hardware with software, and/or with firmware, that is deemed suitable for the intended application. For example, the elements of such an apparatus may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Any two or more, or even all, of these elements may be implemented within the same array or arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips).
One or more elements of the various implementations of the apparatus disclosed herein may be implemented in whole or in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products), and ASICs (application-specific integrated circuits). Any of the various elements of an implementation of an apparatus as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions, also called “processors”), and any two or more, or even all, of these elements may be implemented within the same such computer or computers.
A processor or other means for processing as disclosed herein may be fabricated as one or more electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips). Examples of such arrays include fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, DSPs, FPGAs, ASSPs, and ASICs. A processor or other means for processing as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions) or other processors. It is possible for a processor as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to a procedure of an implementation described herein, such as a task relating to another operation of a device or system in which the processor is embedded (e.g., an audio sensing device). It is also possible for part of a method as disclosed herein to be performed by a processor of the audio sensing device and for another part of the method to be performed under the control of one or more other processors.
Those of skill in the art will appreciate that the various illustrative modules, logical blocks, circuits, and tests and other operations described in connection with the configurations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Such modules, logical blocks, circuits, and operations may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an ASIC or ASSP, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to produce the configuration as disclosed herein. For example, such a configuration may be implemented at least in part as a hard-wired circuit, as a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements such as a general purpose processor or other digital signal processing unit. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A software module may reside in a non-transitory storage medium such as RAM (random-access memory), ROM (read-only memory), nonvolatile RAM (NVRAM) such as flash RAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, hard disk, a removable disk, or a CD-ROM; or in any other form of storage medium known in the art. An illustrative storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
It is noted that the various methods disclosed herein may be performed by an array of logic elements such as a processor, and that the various elements of an apparatus as described herein may be implemented as modules designed to execute on such an array. As used herein, the term “module” or “sub-module” can refer to any method, apparatus, device, unit or computer-readable data storage medium that includes computer instructions (e.g., logical expressions) in software, hardware or firmware form. It is to be understood that multiple modules or systems can be combined into one module or system and one module or system can be separated into multiple modules or systems to perform the same functions. When implemented in software or other computer-executable instructions, the elements of a process are essentially the code segments to perform the related tasks, such as with routines, programs, objects, components, data structures, and the like. The term “software” should be understood to include source code, assembly language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples. The program or code segments can be stored in a processor readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link.
Each of the tasks of the methods described herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. In a typical application of an implementation of a method as disclosed herein, an array of logic elements (e.g., logic gates) is configured to perform one, more than one, or even all of the various tasks of the method. One or more (possibly all) of the tasks may also be implemented as code (e.g., one or more sets of instructions), embodied in a computer program product (e.g., one or more data storage media such as disks, flash or other nonvolatile memory cards, semiconductor memory chips, etc.), that is readable and/or executable by a machine (e.g., a computer) including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The tasks of an implementation of a method as disclosed herein may also be performed by more than one such array or machine. In these or other implementations, the tasks may be performed within a device for wireless communications such as a cellular telephone or other device having such communications capability. Such a device may be configured to communicate with circuit-switched and/or packet-switched networks (e.g., using one or more protocols such as VoIP). For example, such a device may include RF circuitry configured to receive and/or transmit encoded frames.
It is expressly disclosed that the various methods disclosed herein may be performed by a portable communications device such as a handset, headset, or portable digital assistant (PDA), and that the various apparatus described herein may be included within such a device.
In one or more exemplary embodiments, the operations described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, such operations may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The term “computer-readable media” includes both computer-readable storage media and communication (e.g., transmission) media. By way of example, and not limitation, computer-readable storage media can comprise an array of storage elements, such as semiconductor memory (which may include without limitation dynamic or static RAM, ROM, EEPROM, and/or flash RAM), or ferroelectric, magnetoresistive, ovonic, polymeric, or phase-change memory; CD-ROM or other optical disk storage; and/or magnetic disk storage or other magnetic storage devices. Such storage media may store information in the form of instructions or data structures that can be accessed by a computer. Communication media can comprise any medium that can be used to carry desired program code in the form of instructions or data structures and that can be accessed by a computer, including any medium that facilitates transfer of a computer program from one place to another. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, and/or microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology such as infrared, radio, and/or microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray Disc™ (Blu-ray Disc Association, Universal City, Calif.), where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
An acoustic signal processing apparatus as described herein may be incorporated into an electronic device, such as a communications device, that accepts speech input in order to control certain operations or that may otherwise benefit from the separation of desired sounds from background noise. Many applications may benefit from enhancing or separating a clear desired sound from background sounds originating from multiple directions. Such applications may include human-machine interfaces in electronic or computing devices that incorporate capabilities such as voice recognition and detection, speech enhancement and separation, voice-activated control, and the like. It may be desirable to implement such an acoustic signal processing apparatus so that it is suitable for devices that provide only limited processing capabilities.
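For readers who want a concrete picture of the spatial filtering that produces the beamformed outputs referred to throughout this description, the following minimal sketch shows one conventional way such a spatially filtered output could be formed with a delay-and-sum beamformer. It is illustrative only: the function name, the uniform-linear-array geometry, and the NumPy-based implementation are assumptions made for this example and are not taken from the disclosure.

```python
# Illustrative delay-and-sum beamformer (a hedged sketch, not the disclosed design).
import numpy as np

def delay_and_sum(mic_signals, mic_spacing_m, steer_angle_rad,
                  sample_rate_hz, speed_of_sound_mps=343.0):
    """Return a spatially filtered (beamformed) signal for one look direction.

    mic_signals: ndarray of shape (num_mics, num_samples), one row per microphone
    of a uniform linear array with element spacing mic_spacing_m.
    """
    num_mics, num_samples = mic_signals.shape
    output = np.zeros(num_samples)
    for m in range(num_mics):
        # Propagation delay of microphone m relative to the first element for a
        # far-field source at steer_angle_rad from broadside.
        delay_s = m * mic_spacing_m * np.sin(steer_angle_rad) / speed_of_sound_mps
        delay_samples = int(round(delay_s * sample_rate_hz))
        # Advance the channel so all microphones line up for the look direction.
        # (np.roll wraps samples at the edges; a practical implementation would
        # zero-pad or use fractional-delay filters instead.)
        output += np.roll(mic_signals[m], -delay_samples)
    return output / num_mics
```

In practice, fractional-delay filtering, windowing, and adaptive weighting would typically replace the integer sample shift and uniform averaging used in this sketch.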
It is possible for one or more elements of an implementation of an apparatus as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to an operation of the apparatus, such as a task relating to another operation of a device or system in which the apparatus is embedded. It is also possible for one or more elements of an implementation of such an apparatus to have structure in common (e.g., a processor used to execute portions of code corresponding to different elements at different times, a set of instructions executed to perform tasks corresponding to different elements at different times, or an arrangement of electronic and/or optical devices performing operations for different elements at different times).
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices might include PCs, network servers, and handheld devices, for example.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
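By way of further illustration only, and not as the claimed implementation, one example form of the similarity determination recited in the claims that follow could use a zero-lag normalized cross-correlation between the fixed array's spatially filtered output and each steerable array's spatially filtered output. Every function and parameter name below, and the choice of correlation as the similarity measure, are assumptions made for this sketch.

```python
# Illustrative similarity determination between spatially filtered outputs
# (a hedged sketch; the disclosure leaves the specific similarity measure open).
import numpy as np

def normalized_similarity(fixed_output, steerable_output):
    """Zero-lag normalized cross-correlation between two equal-length signals."""
    a = np.asarray(fixed_output, dtype=float)
    b = np.asarray(steerable_output, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0.0 else 0.0

def rank_interactions(fixed_output, steerable_outputs_by_participant):
    """Order participants by how strongly each steerable-array output matches
    the fixed-array output (highest similarity first)."""
    scores = {
        participant: normalized_similarity(fixed_output, output)
        for participant, output in steerable_outputs_by_participant.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```

Under those assumptions, the participant whose steered output best matches the fixed-array beam would be the strongest candidate edge in an interaction graph of the kind recited below.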
CLAIMS
1. A system which performs social interaction analysis for a plurality of participants, comprising:
- a processor configured to: determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs, determine a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs, and display an output representative of the social interaction between the participants;
- wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant.
2. The system of claim 1, wherein the output is displayed in real-time as the participants are interacting with each other.
3. The system of claim 1, wherein the output comprises an interaction graph comprising:
- a plurality of identifiers, each identifier corresponding to a respective participant; and
- a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants.
4. The system of claim 3, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted.
5. The system of claim 3, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants.
6. The system of claim 3, wherein each of the participants is a speaker.
7. The system of claim 3, wherein the interaction graph is used to assess group dynamics or topic dynamics.
8. The system of claim 3, wherein the interaction graph indicates social interaction information among the participants.
9. The system of claim 8, wherein the social interaction information is accumulated over a period of time.
10. The system of claim 3, wherein the interaction graph is displayed on a smartphone.
11. The system of claim 3, wherein the interaction graph is displayed on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook.
12. The system of claim 3, wherein each indicator represents active participant location and energy.
13. The system of claim 12, further comprising an additional indicator that represents a refined active participant location and energy.
14. The system of claim 12, wherein the indicators comprise beam patterns.
15. The system of claim 1, wherein the processor is further configured to perform real-time meeting analysis of a meeting the participants are participating in.
16. The system of claim 1, wherein the processor is further configured to generate a personal time line for a participant that shows an interaction history of the participant with respect to the other participants, a meeting topic, or a subject matter.
17. The system of claim 1, wherein the processor is further configured to generate participant interaction statistics over time.
18. The system of claim 1, wherein the processor is further configured to generate an evolution of interaction between participants over time.
19. The system of claim 1, wherein the processor is further configured to generate an interaction graph among the participants.
20. The system of claim 1, further comprising a user interface that is configured for collaboratively zooming into one of the participants in real-time.
21. A method for performing social interaction analysis for a plurality of participants, comprising:
- determining a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs;
- determining a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and
- displaying an output representative of the social interaction between the participants;
- wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant.
22. The method of claim 21, further comprising displaying the output in real-time as the participants are interacting with each other.
23. The method of claim 21, wherein the output comprises an interaction graph comprising:
- a plurality of identifiers, each identifier corresponding to a respective participant; and
- a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants.
24. The method of claim 23, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted.
25. The method of claim 23, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants.
26. The method of claim 23, wherein each of the participants is a speaker.
27. The method of claim 23, further comprising using the interaction graph to assess group dynamics or topic dynamics.
28. The method of claim 23, wherein the interaction graph indicates social interaction information among the participants.
29. The method of claim 28, further comprising accumulating the social interaction information over a period of time.
30. The method of claim 23, further comprising displaying the interaction graph on a smartphone.
31. The method of claim 23, further comprising displaying the interaction graph on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook.
32. The method of claim 23, wherein each indicator represents active participant location and energy.
33. The method of claim 32, further comprising an additional indicator that represents a refined active participant location and energy.
34. The method of claim 32, wherein the indicators comprise beam patterns.
35. The method of claim 21, further comprising performing real-time meeting analysis of a meeting the participants are participating in.
36. The method of claim 21, further comprising generating a personal time line for a participant that shows an interaction history of the participant with respect to other participants, a meeting topic, or a subject matter.
37. The method of claim 21, further comprising generating participant interaction statistics over time.
38. The method of claim 21, further comprising generating an evolution of interaction between participants over time.
39. The method of claim 21, further comprising generating an interaction graph among the participants.
40. The method of claim 21, further comprising collaboratively zooming into one of the participants in real-time.
41. An apparatus for performing social interaction analysis for a plurality of participants, comprising:
- means for determining a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs;
- means for determining a social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and
- means for displaying an output representative of the social interaction between the participants;
- wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant.
42. The apparatus of claim 41, further comprising means for displaying the output in real-time as the participants are interacting with each other.
43. The apparatus of claim 41, wherein the output comprises an interaction graph comprising:
- a plurality of identifiers, each identifier corresponding to a respective participant; and
- a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants.
44. The apparatus of claim 43, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted.
45. The apparatus of claim 43, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants.
46. The apparatus of claim 43, wherein each of the participants is a speaker.
47. The apparatus of claim 43, further comprising means for using the interaction graph to assess group dynamics or topic dynamics.
48. The apparatus of claim 43, wherein the interaction graph indicates social interaction information among the participants.
49. The apparatus of claim 48, further comprising means for accumulating the social interaction information over a period of time.
50. The apparatus of claim 43, further comprising means for displaying the interaction graph on a smartphone.
51. The apparatus of claim 43, further comprising means for displaying the interaction graph on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook.
52. The apparatus of claim 43, wherein each indicator represents active participant location and energy.
53. The apparatus of claim 52, further comprising an additional indicator that represents a refined active participant location and energy.
54. The apparatus of claim 52, wherein the indicators comprise beam patterns.
55. The apparatus of claim 41, further comprising means for performing real-time meeting analysis of a meeting the participants are participating in.
56. The apparatus of claim 41, further comprising means for generating a personal time line for a participant that shows an interaction history of the participant with respect to other participants, a meeting topic, or a subject matter.
57. The apparatus of claim 41, further comprising means for generating participant interaction statistics over time.
58. The apparatus of claim 41, further comprising means for generating an evolution of interaction between participants over time.
59. The apparatus of claim 41, further comprising means for generating an interaction graph among the participants.
60. The apparatus of claim 41, further comprising means for collaboratively zooming into one of the participants in real-time.
61. A non-transitory computer-readable medium comprising computer-readable instructions for causing a processor to:
- determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs;
- determine a social interaction between a plurality of participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs; and
- display an output representative of the social interaction between the plurality of participants;
- wherein the first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant.
62. The computer-readable medium of claim 61, further comprising instructions for causing the processor to display the output in real-time as the participants are interacting with each other.
63. The computer-readable medium of claim 61, wherein the output comprises an interaction graph comprising:
- a plurality of identifiers, each identifier corresponding to a respective participant; and
- a plurality of indicators, each indicator providing information relating to at least one of: a participant looking at another participant, a strength of an interaction between two participants, a participation level of a participant, or a leader of a group of participants.
64. The computer-readable medium of claim 63, wherein the strength of the interaction between two participants is based on a time that the two participants have interacted.
65. The computer-readable medium of claim 63, wherein the indicators have at least one of a direction, a thickness, or a color, wherein the direction indicates which participant is looking at another participant, the thickness indicates the strength of the interaction between two participants, and the color indicates the leader of the group of participants.
66. The computer-readable medium of claim 63, wherein each of the participants is a speaker.
67. The computer-readable medium of claim 63, further comprising instructions for causing the processor to use the interaction graph to assess group dynamics or topic dynamics.
68. The computer-readable medium of claim 63, wherein the interaction graph indicates social interaction information among the participants.
69. The computer-readable medium of claim 68, further comprising instructions for causing the processor to accumulate the social interaction information over a period of time.
70. The computer-readable medium of claim 63, further comprising instructions for causing the processor to display the interaction graph on a smartphone.
71. The computer-readable medium of claim 63, further comprising instructions for causing the processor to display the interaction graph on at least one from among the group comprising a handset, a laptop, a tablet, a computer, and a netbook.
72. The computer-readable medium of claim 63, wherein each indicator represents active participant location and energy.
73. The computer-readable medium of claim 72, further comprising an additional indicator that represents a refined active participant location and energy.
74. The computer-readable medium of claim 72, wherein the indicators comprise beam patterns.
75. The computer-readable medium of claim 61, further comprising instructions for causing the processor to perform real-time meeting analysis of a meeting the participants are participating in.
76. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate a personal time line for a participant that shows an interaction history of the participant with respect to other participants, a meeting topic, or a subject matter.
77. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate participant interaction statistics over time.
78. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate an evolution of interaction between participants over time.
79. The computer-readable medium of claim 61, further comprising instructions for causing the processor to generate an interaction graph among the participants.
80. The computer-readable medium of claim 61, further comprising instructions for causing the processor to collaboratively zoom into one of the participants in real-time.
Type: Application
Filed: Nov 12, 2012
Publication Date: Nov 14, 2013
Applicant: QUALCOMM Incorporated (San Diego, CA)
Inventors: Lae-Hoon Kim (San Diego, CA), Jongwon Shin (Buk-gu), Erik Visser (San Diego, CA)
Application Number: 13/674,773