SYSTEM AND METHOD OF SMART AUDIO LOGGING FOR MOBILE DEVICES
A mobile device that is capable of automatically starting and ending the recording of an audio signal captured by at least one microphone is presented. The mobile device is capable of adjusting a number of parameters related to audio logging based on the context information of the audio input signal.
A claim of priority is made to U.S. Provisional Application No. 61/322,176 entitled “SMART AUDIO LOGGING” filed Apr. 8, 2010, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.
BACKGROUND
I. Field
The present disclosure generally relates to audio and speech signal capturing. More specifically, the disclosure relates to mobile devices capable of initiating and/or terminating audio and speech signal capturing operations, or, interchangeably, logging operations, based on the analysis of audio context information.
II. Description of Related Art
Thanks to advances in power control technology for Application Specific Integrated Circuits (ASICs) and the increased computational power of mobile processors such as Digital Signal Processors (DSPs) or microprocessors, an increasing number of mobile devices are now capable of enabling much more complex features that were not regarded as feasible until recently due to the lack of required computational power or hardware (HW) support. For example, mobile stations (MS) or mobile phones were initially developed to enable voice or speech communication over traditional circuit-based wireless cellular networks. Thus, the MS was originally designed to address fundamental voice applications like voice compression, acoustic echo cancellation (AEC), noise suppression (NS), and voice recording.
The process of implementing a voice compression algorithm is known as vocoding and the implementing apparatus is known as a vocoder or “speech coder.” Several standardized vocoding algorithms exist in support of the different digital communication systems which require speech communication. The 3rd Generation Partnership Project 2 (3GPP2) is an example standardization organization which specifies Code Division Multiple Access (CDMA) technology such as IS-95, CDMA2000 1x Radio Transmission Technology (1xRTT), and CDMA2000 Evolution-Data Optimized (EV-DO) communication systems. The 3rd Generation Partnership Project (3GPP) is another example standardization organization which specifies the Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), High-Speed Packet Access Evolution (HSPA+), and Long Term Evolution (LTE). The Voice over Internet Protocol (VOIP) is an example protocol used in the communication systems defined in 3GPP and 3GPP2, as well as others. Examples of vocoders employed in such communication systems and protocols include International Telecommunications Union (ITU)-T G.729, Adaptive Multi-Rate (AMR) codec, and Enhanced Variable Rate Codec (EVRC) speech service options 3, 68, and 70.
Voice recording is an application for recording the human voice. It is often referred to, interchangeably, as voice logging or voice memo. Voice recording allows users to save some portion of a speech signal picked up by one or more microphones into a memory space. The saved voice recording can be played back later on the same device, or it can be transmitted to a different device through a voice communication system. Although voice recorders can record some music signals, the quality of recorded music is typically not superb because the voice recorder is optimized for the speech characteristics produced by the human vocal tract.
Audio recording or audio logging is sometimes used interchangeably with voice recording, but it is sometimes understood as a different application that records any audible sound, including human voice, instruments, and music, because of its ability to capture signals of higher frequency than those generated by the human vocal tract. In the context of the present application, the terminology “audio logging” or “audio recording” will be used broadly to refer to voice recording or audio recording.
Audio logging enables recording of all or some portions of an audio signal of interest which are typically picked up by one or more microphones in one or more mobile devices. Audio logging is sometimes referred to as audio recording or audio memo interchangeably.
SUMMARY
This document describes a method of processing a digital audio signal for a mobile device. This method includes receiving an acoustic signal by at least one microphone; converting the received acoustic signal into the digital audio signal; extracting at least one auditory context information from the digital audio signal; in response to automatically detecting a start event indicator, performing an audio logging for the digital audio signal; and in response to automatically detecting an end event indicator, ending the audio logging. The at least one auditory context information may be related to audio classification, keyword identification, or speaker identification. The at least one auditory context information may be based at least in part on signal energy, signal-to-noise ratio, spectral tilt, or zero-crossing rate. The at least one auditory context information may be based at least in part on non-auditory information such as scheduling information or calendaring information. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
This document also describes a method of processing a digital audio signal for a mobile device. This method includes receiving an acoustic signal by at least one microphone; transforming the received acoustic signal into an electrical signal; sampling the electrical signal based on a sampling frequency and a data width for each sample to obtain the digital audio signal; storing the digital audio signal into a buffer; extracting at least one auditory context information from the digital audio signal; in response to automatically detecting a start event indicator, performing an audio logging for the digital audio signal; and in response to automatically detecting an end event indicator, ending the audio logging. Detecting the start or end event indicators may be based at least in part on non-auditory information such as scheduling information or calendaring information. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
This document also describes a method of detecting a start event indicator. This method includes selecting at least one context information from the at least one auditory context information; comparing the selected context information with at least one pre-determined threshold; and determining whether the start event indicator has been detected based on the comparison. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
This document also describes a method of detecting an end event indicator. This method includes selecting at least one context information from the at least one auditory context information; comparing the selected context information with at least one pre-determined threshold; and determining whether the end event indicator has been detected based on the comparison. Detecting an end event indicator may be based at least in part on the non-occurrence of an auditory event during a pre-determined period of time. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
This document also describes a method of performing the audio logging. This method includes updating at least one parameter related to the converting based at least in part on the at least one auditory context information; in response to determining that additional processing is required based at least in part on the at least one auditory context information, applying the additional processing to the digital audio signal to obtain a processed audio signal; and storing the processed audio signal into a memory storage. The additional processing may be signal enhancement processing such as acoustic echo cancellation (AEC), receiving voice enhancement (RVE), active noise cancellation (ANC), noise suppression (NS), acoustic gain control (AGC), acoustic volume control (AVC), or acoustic dynamic range control (ADRC). The noise suppression may be based on a single-microphone or multiple-microphone solution. The additional processing may be signal compression processing such as speech compression or audio compression. The compression parameters, such as compression mode, bitrate, or channel number, may be determined based on the auditory context information. The memory storage includes a local memory inside the mobile device or a remote memory connected to the mobile device through a wireless channel. The selection between the local memory and the remote memory may be based at least in part on the auditory context information. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
This document also describes a method for a mobile device which includes automatically detecting a start event indicator; processing a first portion of an audio input signal to obtain first information in response to the detecting of the start event indicator; determining at least one recording parameter based on the first information; and reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter. The reconfiguring may occur during an inactive portion of the audio input signal. The at least one recording parameter may include information indicative of a sampling frequency or a data width for an A/D converter of the mobile device; information indicative of the number of active microphones of the mobile device; or timing information indicative of at least one microphone's wake-up interval or active duration. The first information may be context information describing an environment in which the mobile device is recording or a characteristic of the audio input signal. The start event indicator may be based on a signal transmitted over a wireless channel. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
This document also describes a method for a mobile device which includes automatically detecting a start event indicator; processing a first portion of an audio input signal to obtain first information in response to the detecting of the start event indicator; determining at least one recording parameter based on the first information; reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter; processing a second portion of the audio input signal to obtain second information; enhancing the audio input signal by suppressing a background noise to obtain an enhanced signal; encoding the enhanced signal to obtain an encoded signal; and storing the encoded signal at a local storage within the mobile device. The encoding of the enhanced signal includes determining an encoding type based on the second information; determining at least one encoding parameter for the determined encoding type; and processing the enhanced signal based on the determined encoding type and the determined at least one encoding parameter to obtain the encoded signal. Herein, the at least one encoding parameter includes a bitrate or an encoding mode. In addition, this method may include determining a degree of the enhancing of the audio input signal based on the second information. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
This document also describes a method for a mobile device which includes automatically detecting a start event indicator; processing a first portion of an audio input signal to obtain first information in response to the detecting of the start event indicator; determining at least one recording parameter based on the first information; reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter; processing a second portion of the audio input signal to obtain second information; enhancing the audio input signal by suppressing a background noise to obtain an enhanced signal; encoding the enhanced signal to obtain an encoded signal; and storing the encoded signal at a local storage within the mobile device. In addition, this method may include automatically detecting an end event indicator and, in response to detecting the end event indicator, determining a long-term storage location for the encoded signal between the local storage within the mobile device and a network storage connected to the mobile device through a wireless channel. The determination of the long-term storage location may be based on a priority of the encoded signal. This document also describes an apparatus, a combination of means, and a computer-readable medium relating to this method.
The aspects and the attendant advantages of the embodiments described herein will become more readily apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings wherein:
The present application will be better understood by reference to the accompanying drawings.
Unless expressly limited by its context, the term “signal” is used herein to indicate any of its ordinary meanings, including a state of a memory location (or set of memory locations) as expressed on a wire, bus, or other transmission medium. Unless expressly limited by its context, the term “generating” is used herein to indicate any of its ordinary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term “calculating” is used herein to indicate any of its ordinary meanings, such as computing, evaluating, and/or selecting from a set of values. Unless expressly limited by its context, the term “obtaining” is used to indicate any of its ordinary meanings, such as calculating, deriving, receiving (e.g., from an external device), and/or retrieving (e.g., from an array of storage elements). Where the term “comprising” is used in the present description and claims, it does not exclude other elements or operations. The term “based on” (as in “A is based on B”) is used to indicate any of its ordinary meanings, including the cases (i) “based on at least” (e.g., “A is based on at least B”) and, if appropriate in the particular context, (ii) “equal to” (e.g., “A is equal to B”).
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). Unless indicated otherwise, the term “context” (or “audio context”) is used to indicate a component of an audio or speech signal that conveys information from the ambient environment of the speaker, and the term “noise” is used to indicate any other artifact in the audio or speech signal.
The smart audio logging system may be configured to perform smart start 115 or smart end 150 of audio logging. In comparison to a conventional audio logging system, in which a user manually initiates or ends recording of the audio signal, the smart audio logging system may be configured to start or end audio logging by automatically detecting a start event indicator or an end event indicator. These indicators may be based on the context information derived from the audio signal; on databases located within the mobile device or connected to the mobile device through wired or wireless network connections; on non-acoustic sensors; or even on signaling from other smart audio logging devices. Alternatively, these indicators may be configured to include a user's voice command or key command as well. In one embodiment, the end event indicator may be configured to be based on the non-occurrence of an auditory event during a pre-determined period of time. The detection of the start event indicator and the end event indicator may include the steps of selecting at least one particular context information out of the at least one auditory context information; comparing the selected context information with at least one pre-determined threshold; and determining whether the start or end event indicators have been detected based on the comparison.
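As a rough illustration of the threshold comparison just described, the following Python sketch detects start and end event indicators from extracted context values. The feature names, threshold values, and silence time-out are illustrative assumptions, not parameters prescribed by this disclosure.

```python
# Sketch of start/end event detection by threshold comparison.
# Feature names and threshold values are illustrative assumptions.

START_THRESHOLDS = {"signal_energy": 0.02, "snr_db": 10.0}
END_SILENCE_SEC = 30.0  # end on non-occurrence of auditory events this long

def detect_start_event(context: dict) -> bool:
    """Start when every selected context feature meets its threshold."""
    return all(context.get(name, 0.0) >= threshold
               for name, threshold in START_THRESHOLDS.items())

def detect_end_event(seconds_since_last_event: float) -> bool:
    """End on non-occurrence of an auditory event for a pre-determined period."""
    return seconds_since_last_event >= END_SILENCE_SEC

# Example: context values extracted from buffered audio frames
print(detect_start_event({"signal_energy": 0.05, "snr_db": 14.2}))  # True
```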
The smart audio logging system may be configured to comprise a number of smart sub-blocks or, interchangeably, smart building blocks based at least in part on the at least one auditory context information. A smart building block may be characterized by its ability to dynamically configure its own operational mode or functional parameters during the audio logging process, in contrast to conventional audio logging, in which the configuration or operational mode may be pre-determined or statically determined during operation.
For instance, in one embodiment of smart audio logging, the smart microphone control block 120 may dynamically configure the number of active microphones or their wake-up intervals and ON durations according to the context information.
In another embodiment, the smart audio enhancement block 130 may dynamically determine whether, and how aggressively, to enhance the audio input according to the context information.
It should be noted that the smart building blocks 120, 125, 130, 135, 145 and the order thereof disclosed herein are exemplary; the blocks may be combined, reordered, or omitted within the scope of the present disclosure.
The smart audio logging system may also refer to a system configured to use a combination of parts of an existing conventional audio logging system and some of either the smart building blocks or the smart start/end of logging feature presented above.
The Auditory Event S210 refers generally to an audio signal or, more particularly, to an audio signal of interest to a user. For instance, the Auditory Event S210 may include, but is not limited to, the presence of a speech signal, music, specific background noise characteristics, or specific keywords. The Auditory Event S210 is often referred to as an “auditory scene” in the art.
The Audio Capturing Unit 215 may include at least one microphone or at least one A/D converter. The at least one microphone or at least one A/D converter may have been part of a conventional audio logging system and may be powered up only during active usage of the mobile device. For example, a traditional audio capturing unit in a conventional system may be configured to be powered up only during an entire voice call or an entire video recording, in response to the user's placing or receiving the call or pressing the video recording start button.
In the present application, however, the Audio Capturing Unit 215 may be configured to intermittently wake up, or power up, even during the idle mode of the mobile device, in addition to during a voice call or the execution of any other application that might require active usage of at least one microphone. The Audio Capturing Unit 215 may even be configured to stay powered up, continuously picking up an audio signal. This approach may be referred to as “Always On.” The picked-up audio signal S260 may be configured to be stored in the Buffer 220 in a discrete form.
The “idle mode” of the mobile device described herein generally refers to the status in which the mobile device is not actively running any application in response to a user's manual input, unless specified otherwise. For example, typical mobile devices send or receive signals periodically to and from one or more base stations even without the user's selection. The status of a mobile device performing this type of activity is regarded as idle mode within the scope of the present application. When the user is actively engaged in voice communication or video recording using his or her mobile device, the device is not regarded as being in idle mode.
The Buffer 220 stores digital audio data temporarily before the digital audio data is processed by the Audio Logging Processor 230. The Buffer 220 may be any physical memory and, although it is preferably located within the mobile device due to faster access and the relatively small memory footprint required by the Audio Capturing Unit 215, the Buffer 220 could also be located outside the mobile device and accessed via wireless or wired network connections. In another embodiment, the picked-up audio signal S260 may be configured to be directly connected to the Audio Logging Processor 230 without being temporarily stored in the Buffer 220. In such a case, the picked-up audio signal S260 may be identical to the Audio Input S270.
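A minimal sketch of such a buffer follows, assuming a fixed-capacity circular buffer that silently drops the oldest samples. The sample rate and buffer depth are illustrative assumptions, not values specified by this disclosure.

```python
from collections import deque

# Sketch of the temporary buffer between the audio capturing unit and
# the audio logging processor. Capacity values are illustrative.

SAMPLE_RATE_HZ = 8000
BUFFER_SECONDS = 10
audio_buffer = deque(maxlen=SAMPLE_RATE_HZ * BUFFER_SECONDS)  # oldest samples drop off

def on_samples_captured(samples):
    """Called by the audio capturing unit with newly digitized samples."""
    audio_buffer.extend(samples)

def read_for_processing(n: int):
    """Called by the audio logging processor to consume buffered samples."""
    return [audio_buffer.popleft() for _ in range(min(n, len(audio_buffer)))]

on_samples_captured([0.0, 0.1, -0.1])
print(read_for_processing(2))  # [0.0, 0.1]
```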
The Audio Logging Processor 230 is the main processing unit of the smart audio logging system. It may be configured to make various decisions with respect to when to start or end logging or how to configure the smart building blocks. It may be further configured to control adjacent blocks, to interface with the Input Processing Unit 250 or the Output Processing Unit 240, to determine the internal state of the smart audio logging system, and to access the Auxiliary Data Unit 280 or databases. One example of an embodiment of the Audio Logging Processor 230 is presented in the accompanying drawings.
The Auxiliary Data Unit 280 may include various databases or application programs, and it may be configured to provide additional information that may be used in part or in whole by the Audio Logging Processor 230. In one embodiment, the Auxiliary Data Unit 280 may include scheduling information of the owner of the mobile device equipped with the smart audio logging feature. In such a case, the scheduling information may, for example, include the following details: “the time and/or duration of the next business meeting,” “invited attendees,” “location of the meeting place,” or “subject of the meeting,” to name a few. In one embodiment, the scheduling information may be obtained from a calendaring application such as Microsoft Outlook or any other commercially available calendar application. Upon receiving or actively retrieving these types of details from the Auxiliary Data Unit 280, the Audio Logging Processor 230 may be configured to decide when to start or stop audio logging according to the details, preferably in combination with the context information S600 extracted from the discrete audio input data stored in the Buffer 220.
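The following sketch illustrates one way scheduling details could be combined with audio context to decide when to start logging. The field names and the decision rule are illustrative assumptions; real scheduling data would come from the calendaring application.

```python
from datetime import datetime, timedelta

# Sketch of combining calendaring details with audio context to decide
# when to start logging. Field names and the rule are illustrative.

meeting = {
    "start": datetime(2011, 4, 8, 10, 0),
    "duration": timedelta(hours=1),
    "subject": "design review",
}

def should_start_logging(now: datetime, speech_detected: bool) -> bool:
    """Start when a scheduled meeting is underway and speech is present."""
    in_meeting = meeting["start"] <= now < meeting["start"] + meeting["duration"]
    return in_meeting and speech_detected

print(should_start_logging(datetime(2011, 4, 8, 10, 15), True))  # True
```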
Storage generally refers to one or more memory locations in the system designed to store the processed audio logging from the Audio Logging Processor 230. The Storage may comprise Local Storage 270, which is locally available inside the mobile device, or Remote Storage 290, which is remotely connected to the mobile device via a wired or wireless communication channel. The Audio Logging Processor 230 may be configured to select where to store the processed audio loggings between the Local Storage 270 and the Remote Storage 290. The storage selection may be made according to various factors, which may include, but are not limited to, the context information S600, the estimated size of the audio loggings, the available memory size, the network speed, the latency of the network, or the priority of the context information S600. The storage selection may even be configured to be switched between the Local Storage 270 and the Remote Storage 290 dynamically during an active audio logging process if necessary.
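A minimal sketch of such a selection follows. The factors mirror those listed above, but the weighting and threshold values are illustrative assumptions, not a selection rule prescribed by this disclosure.

```python
# Sketch of selecting between Local Storage 270 and Remote Storage 290.
# The weighting and thresholds are illustrative assumptions.

def select_storage(estimated_size_mb: float, free_local_mb: float,
                   network_kbps: float, priority: int) -> str:
    """Return "local" or "remote" for the processed audio logging."""
    if estimated_size_mb > free_local_mb:
        return "remote"                    # logging cannot fit locally
    if priority >= 8 and network_kbps > 200.0:
        return "remote"                    # archive high-priority logs server-side
    return "local"

print(select_storage(12.0, 500.0, 1000.0, priority=9))  # remote
```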
The Auditory Activity Detector 510 module, or “audio detector,” may detect the level of audio activity from the Audio Input S270. The audio activity may be defined as a binary classification, such as active or non-active, or as more levels of classification if necessary. Various methods of determining the audio level of the Audio Input S270 may be used. For example, the Auditory Activity Detector 510 may be based on signal energy, signal-to-noise ratio (SNR), periodicity, spectral tilt, and/or zero-crossing rate, but it is preferable to use relatively simple solutions in order to keep the computational complexity as low as possible, which in turn helps to extend battery life. The Audio Quality Enhancer 520 module may improve the quality of the Audio Input S270 by suppressing background noise actively or passively; by cancelling acoustic echo; by adjusting input gain; or by improving the intelligibility of the Audio Input S270 for a conversational speech signal.
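As a concrete illustration of the low-complexity detection suggested above, the following sketch classifies one frame as active or non-active using frame energy and zero-crossing rate. The threshold values are illustrative assumptions.

```python
# Sketch of a low-complexity auditory activity detector using frame
# energy and zero-crossing rate. Thresholds are illustrative.

ENERGY_THRESHOLD = 1e-4
ZCR_THRESHOLD = 0.5

def frame_energy(frame) -> float:
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame) -> float:
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def is_active(frame) -> bool:
    """Binary active/non-active classification of one audio frame."""
    return (frame_energy(frame) > ENERGY_THRESHOLD
            and zero_crossing_rate(frame) < ZCR_THRESHOLD)

print(is_active([0.04, 0.05, 0.03, 0.04, -0.01, 0.02]))  # True
```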
The Aux Signal Analyzer 530 module may analyze the auxiliary signal from the Auxiliary Data Unit 280. For example, the auxiliary signal may include data from a scheduling program such as a calendaring program or an email client program. It may also include additional databases such as a dictionary, employee profiles, or various audio and speech parameters obtained from third-party sources or training data. The Input Signal Handler 540 module may detect, process, or analyze the Input Signal S220 from the Input Processing Unit 250. The Output Signal Handler 590 module may generate the Output Signal S230 accordingly for the Output Processing Unit 240.
The Control Signal Handler 550 handles various control signals that may be applied to peripheral units of the smart audio logging system. Two examples of the control signals, A/D Converter Control S215 and Microphone Unit Control S205, are disclosed in the accompanying drawings.
The General Audio Signal Processor 595 is a multi-purpose module for handling all other fundamental audio and speech signal processing methods not explicitly presented in the present application but still necessary for a successful implementation. For example, these signal processing methods may include, but are not limited to, time-to-frequency or frequency-to-time conversions; miscellaneous filtering; signal gain adjustment; or dynamic range control. It should be noted that each module disclosed separately herein may be combined with one or more other modules in an actual implementation.
The Music/Speech Detector 820 may also be configured to classify the input signal into a multi-level classification. For example, in one embodiment, the Music/Speech Detector 820 may classify the input signal into a first-level classification such as “Music,” “Speech,” or “Music+Speech.” Subsequently, it may further determine a second-level classification such as “Rock,” “Pop,” or “Classic” for a signal classified as “Music” at the first-level classification stage. In the same manner, it may also determine a second-level classification such as “Business Conversation,” “Personal Conversation,” or “Lecture” for a signal classified as “Speech” at the first-level classification stage.
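The sketch below shows the shape of such a two-level classification; the classifier functions are placeholder stubs standing in for trained models, and the label sets follow the examples above.

```python
# Sketch of two-level classification. The classifier functions are
# stubs; a real implementation would use trained models on features.

SECOND_LEVEL = {
    "Music": ["Rock", "Pop", "Classic"],
    "Speech": ["Business Conversation", "Personal Conversation", "Lecture"],
}

def classify_first_level(features) -> str:
    return "Speech"  # placeholder for a real model's decision

def classify_second_level(first_level: str, features) -> str:
    candidates = SECOND_LEVEL.get(first_level, ["Unknown"])
    return candidates[0]  # placeholder: a real model would score each candidate

features = {}  # spectral features extracted from the Audio Input S270
level1 = classify_first_level(features)
print(level1, "->", classify_second_level(level1, features))
```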
The Speaker Identifier 830 may be configured to detect the identity of the speaker of a speech signal input. The speaker identification process may be based on characteristics of the input speech signal such as signal or frame energy, signal-to-noise ratio (SNR), periodicity, spectral tilt, and/or zero-crossing rate. The Speaker Identifier 830 may be configured to identify a simple classification such as “Male Speaker” or “Female Speaker,” or to identify more sophisticated information such as the name or title of the speaker. Identifying the name or title of the speaker could require extensive computational complexity, and it becomes even more challenging when the Speaker Identifier 830 has to search a large number of speech samples for various reasons.
For example, let us assume the following hypothetical situation. Company X has 15,000 employees overall, and a user Y has to attend a series of work-related audio conference meetings each day using his mobile device equipped with the smart audio logging feature. The user Y wants to identify speakers in real time when a number of speakers, employees of company X, are involved in the conversation. First, speech samples, or speech characteristics extracted from the speech samples, may not be available in the first place for all employees. Second, even if they are already available in local memory or at a remote server connected via a wireless channel, searching that large a number of speech samples in real time on the mobile device may be extremely challenging. Third, even if the searching may be done at the remote server side and the computing power of the server is significantly higher than that of the mobile device, real-time processing could still be challenging considering the Rx/Tx transmission latency. These problems may become manageable if additional information is available from an auxiliary database. For example, if the list of conference participants is available from a calendaring program, the Speaker Identifier may effectively reduce the number of people to be searched by narrowing down the search space, as in the sketch below.
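In this sketch, the voiceprints and the distance score are illustrative stand-ins for real speaker models; the point is only that restricting the candidate set to the invited attendees shrinks the search from thousands of entries to a handful.

```python
# Sketch of narrowing the speaker-identification search space with the
# participant list from a calendaring program. Voiceprints and the
# scoring function are illustrative stand-ins for real speaker models.

employee_voiceprints = {
    "Alice": [0.1, 0.8], "Bob": [0.4, 0.2], "Carol": [0.9, 0.5],
    # ... up to 15,000 entries in the hypothetical Company X database
}

def score(voiceprint, sample) -> float:
    """Lower is better; stands in for a speaker-model likelihood."""
    return sum((a - b) ** 2 for a, b in zip(voiceprint, sample))

def identify_speaker(sample, participants=None) -> str:
    """Search only invited attendees when the meeting list is available."""
    candidates = participants if participants else employee_voiceprints
    return min(candidates, key=lambda name: score(employee_voiceprints[name], sample))

print(identify_speaker([0.2, 0.7], participants=["Alice", "Bob"]))  # Alice
```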
The Environment Detector 850 may be configured to identify an auditory scene based on one or more characteristics of the input speech signal, such as frame energy, signal-to-noise ratio (SNR), periodicity, spectral tilt, and/or zero-crossing rate. For example, it may identify the environment of the current input signal as “Office,” “Car,” “Restaurant,” “Subway,” “Ball Park,” and so on.
The Noise Classifier 840 may be configured to classify the characteristics of the background noise of the Audio Input S270. For example, it may identify the background noise as “Stationary vs. Non-stationary,” “Street noise,” “Airplane noise,” or a combination thereof. It may classify the background noise based on its severity level, such as “Severe” or “Medium.” The Noise Classifier 840 may be configured to classify the input in single-stage or multi-stage processing.
The Emotion Detector 860 may be configured to detect the emotion of a speaker for conversational speech or the emotional aspect of music content. Music carries a number of interesting acoustic parameters; for example, music may include rhythms, instruments, tones, vocals, timbres, notes, and lyrics. These parameters may be used to detect or estimate the emotion of a speaker for one or more emotion categories such as happiness, anger, fear, victory, anxiety, or depression. The Engaging Activity Detector 870 may be configured to detect the activity of the speaker based on the characteristics of the Audio Input S270. For example, it may detect that the speaker is “Talking,” “Running,” “Walking,” “Playing sports,” “In class,” or “Shopping.” The detection may be based on speech parameters and/or music signal parameters. The detection may also be configured to get supplementary information from the Auxiliary Data Unit 280 or from the other modules in the system.
The Aux Signal Analyzer 530 may also generate an internal triggering signal according to the schedule of the user's calendaring program. A specific meeting that the user wanted to record may automatically generate the internal triggering signal without any manual intervention from the user. Alternatively, the Aux Signal Analyzer 530 may be configured to make such decisions based on explicit or implicit priorities of the meeting. The generation of the internal triggering signal may be initiated from inputs other than the analysis of the Audio Input S270 or the Aux Signal. Such inputs may include the user's voice or manual key controls; a timer; signals from non-acoustic sensors such as a camera, GPS, proximity sensor, gyroscope, ambient sensor, or accelerometer; or a signal transmitted from at least one other smart audio logging system. The Combinatorial Logic 900 may be configured to generate the Start Event Indicator S910 based on certain combination mechanisms of the internal triggering signals, as in the sketch below. For example, the combinatorial logic may be configured to generate the Start Event Indicator S910 according to an OR operation or an AND operation of the internal triggering signals from the Auditory Activity Detector 510, the Aux Signal Analyzer 530, or the Input Signal Handler 540. In another embodiment, it may be configured to generate the Start Event Indicator S910 when one or more internal triggering signals have been set or triggered.
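A minimal sketch of the OR/AND combination follows; the source names are illustrative labels for the internal triggering signals named above.

```python
# Sketch of the Combinatorial Logic 900 merging internal triggering
# signals into the Start Event Indicator S910. Signal names are
# illustrative labels.

def start_event_indicator(triggers: dict, mode: str = "OR") -> bool:
    """triggers maps each source to its internal triggering signal."""
    if mode == "OR":
        return any(triggers.values())
    if mode == "AND":
        return all(triggers.values())
    raise ValueError("unsupported combination mode")

triggers = {
    "auditory_activity_detector_510": True,
    "aux_signal_analyzer_530": False,   # e.g., calendar-driven trigger
    "input_signal_handler_540": False,  # e.g., user's voice or key command
}
print(start_event_indicator(triggers, mode="OR"))   # True
print(start_event_indicator(triggers, mode="AND"))  # False
```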
The state may be changed from the Passive Audio Monitoring State S1 to the Active Audio Monitoring State S2 upon triggering of the 1st-level Start Event Indicator S920. During the Active Audio Monitoring State S2, the smart audio logging system may be configured to wake up one or more extra modules, such as the Context Identifier 560 or the Context Evaluation Logic 950. These extra modules may be used to provide in-depth monitoring and analysis of the Audio Input S270 signal to determine whether the 2nd-level Start Event Indicator S930 is required to be triggered, according to the description presented above.
The state may be changed from the Audio Monitoring State S4 to the Active Audio Logging State S5 upon triggering of the Start Event Indicator S910, and the actual audio logging then follows during the Active Audio Logging State S5. A detailed description of typical operation in each state is presented in the following paragraphs. If the End Event Indicator S940 is triggered during the Active Audio Logging State S5, the system may be configured to stop audio logging and switch the state back to the Audio Monitoring State S4.
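The transitions just described can be summarized in a small state machine. In this sketch the S3 and S5 logging states of the two embodiments are collapsed into one for brevity, and the indicator flags are assumed to be supplied by the detection logic described above.

```python
from enum import Enum

# Sketch of the monitoring/logging state transitions. The S3 and S5
# logging states are collapsed into one for brevity.

class State(Enum):
    PASSIVE_MONITORING = "S1"
    ACTIVE_MONITORING = "S2"
    AUDIO_MONITORING = "S4"
    ACTIVE_LOGGING = "S5"

def next_state(state, first_level=False, start=False, end=False):
    if state is State.PASSIVE_MONITORING and first_level:
        return State.ACTIVE_MONITORING        # wake extra analysis modules
    if state in (State.ACTIVE_MONITORING, State.AUDIO_MONITORING) and start:
        return State.ACTIVE_LOGGING           # actual audio logging begins
    if state is State.ACTIVE_LOGGING and end:
        return State.AUDIO_MONITORING         # stop logging, keep monitoring
    return state

print(next_state(State.AUDIO_MONITORING, start=True))  # State.ACTIVE_LOGGING
```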
A microphone may be configured to wake up at every interval T1, the microphone wake-up interval, and collect the Audio Input S270 for a duration T2, the microphone ON duration. The values of T1 or T2 may be pre-determined at a fixed interval or may be dynamically adapted during run time. In an exemplary implementation of the system, T1 may be greater than T2, or T2 may be determined to be smaller than but proportional to T1. If there is more than one microphone in the Microphone Unit 200, each microphone may be configured to have the same interval, or some microphones may be configured to have intervals different from the others. In one embodiment, some of the microphones may not be turned on at all during the Passive Audio Monitoring State S1.
Digitized audio inputs during the T2 duration may be stored in the Buffer 220 at every T1 interval, and the stored digital audio input may be accessed and processed by the Audio Logging Processor 230 at every T3 interval, as in the sketch below.
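The following sketch makes the timing relationship concrete: the microphone is on for T2 out of every T1 seconds, and the processor reads the buffer every T3 seconds. The numeric values are illustrative assumptions, chosen only so that T2 < T1 ≤ T3.

```python
# Sketch of duty-cycled capture: mic wakes every T1, stays on for T2,
# and the buffer is processed every T3. Values are illustrative.

T1, T2, T3 = 10.0, 1.0, 30.0  # seconds, with T2 < T1 <= T3

def capture_schedule(total_seconds: float):
    """Yield (on_time, off_time, process) events for one microphone."""
    t = 0.0
    while t < total_seconds:
        yield t, t + T2, (t > 0 and t % T3 == 0)
        t += T1

for on, off, process in capture_schedule(35.0):
    note = "  -> processor reads buffer" if process else ""
    print(f"mic on at {on:5.1f}s, off at {off:5.1f}s{note}")
```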
One skilled in the art would recognize that the order of blocks in the flowcharts disclosed herein is exemplary and may be rearranged within the scope of the present disclosure.
A microphone may be configured to wake up at every interval T4, the microphone wake-up interval, and collect the Audio Input S270 for a duration T5, the microphone ON duration. The values of T4 or T5 may be identical or substantially similar to the values of T1 or T2, respectively. However, it may be preferable to set T4 to be smaller than T1 because it may be beneficial for the Audio Logging Processor 230 to extract more accurate context information S600. In another embodiment, the values of T4 or T5 may be pre-determined at a fixed interval or may be dynamically adapted during run time. In another embodiment, in which there are a plurality of microphones in the Microphone Unit 200, one or more microphones may be turned on constantly, which is merely the special case in which T4 is identical to T5.
The system may also be configured to dynamically select the memory location 2420 based on the context information S600. For example, the system may be configured to store the audio logging data to storage remotely connected at the server side when one or more speakers in the conversation turn out to meet a certain profile, such as a major business customer, or when the Audio Input S270 substantially includes more music than speech signal. In such cases it may be desirable to use a higher resolution of the A/D converter, which therefore requires a larger storage space.
The Audio Logging Processor 230 then may be configured to read the audio data 2424 from the Buffer 220. New Context Information may be identified 2430 from the latest audio data, and the new Context Information may be stored 2435 in memory. In another embodiment, the Context Identification process 2430 or the saving process 2435 of the context information S600 may be skipped or re-positioned in a different order with respect to the other blocks in the flowchart, within the scope of the general principles disclosed herein.
The Audio Logging Processor 230 may be configured to determine 2440 whether enhancement of the Audio Input S270 signal is desirable and, in such a case, what types of enhancement processing may be desirable before the processed signal is stored in the selected memory. The determination may be based on the context information S600, or it may be pre-configured automatically by the system or manually by the user. Such enhancement processing may include acoustic echo cancellation (AEC), receiving voice enhancement (RVE), active noise cancellation (ANC), noise suppression (NS), acoustic gain control (AGC), acoustic volume control (AVC), or acoustic dynamic range control (ADRC). In one embodiment, the aggressiveness of the signal enhancement may be based on the content of the Audio Input S270 or the context information S600.
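One way to picture this determination is as a dispatch from a context label to an ordered enhancement chain, as in the sketch below. The mapping and the context labels are illustrative assumptions; only the acronyms follow the text above.

```python
# Sketch of choosing the enhancement chain 2440 from the context
# information. The mapping and labels are illustrative assumptions.

ENHANCEMENT_BY_CONTEXT = {
    "conversation_in_car": ["NS", "AGC"],  # noise suppression, gain control
    "lecture_hall": ["AGC", "ADRC"],
    "music": [],                           # avoid coloring a music signal
}

def plan_enhancement(context_label: str, aggressive: bool):
    """Return the ordered list of enhancement stages to apply."""
    chain = list(ENHANCEMENT_BY_CONTEXT.get(context_label, ["NS"]))
    if aggressive and "NS" in chain:
        chain[chain.index("NS")] = "NS(aggressive)"
    return chain

print(plan_enhancement("conversation_in_car", aggressive=True))
```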
The Audio Logging Processor 230 may be configured to determine 2445 whether compression of the Audio Input S270 signal is desirable and, in such a case, what types of compression processing may be desirable before the processed signal is stored in the selected memory location. The determination may be based on the context information S600, or it may be pre-configured automatically by the system or manually by the user. For example, the system may elect to use compression before the audio logging starts based on the expected duration of the audio logging, preferably derived from the calendaring information. The selection of a compression method, such as speech coding or audio coding, may be dynamically configured based upon the content of the Audio Input S270 or the context information S600. Unless specified otherwise, compression within the context of the present application means source coding, such as speech encoding/decoding and audio encoding/decoding; therefore, it should be obvious to one skilled in the art that compression may be used interchangeably with encoding, and decompression interchangeably with decoding. The encoding parameters, such as the bitrate, the encoding mode, or the number of channels, may also be dynamically configured based on the content of the Audio Input S270 or the context information S600.
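The sketch below shows one possible shape of this decision: content class and expected duration select a codec family and its parameters. The specific mapping and parameter values are illustrative assumptions, not a policy prescribed by this disclosure.

```python
# Sketch of selecting a compression method and its parameters 2445
# from the content class and expected duration. The mapping is an
# illustrative assumption.

def select_compression(content_class: str, expected_minutes: float) -> dict:
    if expected_minutes < 1.0:
        return {"codec": None}  # short clip: store uncompressed
    if content_class == "Speech":
        return {"codec": "speech", "bitrate_kbps": 12.2, "channels": 1}
    if content_class == "Music":
        return {"codec": "audio", "bitrate_kbps": 128, "channels": 2}
    return {"codec": "audio", "bitrate_kbps": 64, "channels": 1}

print(select_compression("Speech", expected_minutes=45.0))
```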
During the Active Audio Logging State S3 or S5, the number of active microphones may be configured to change dynamically according to the context information S600. For example, the number of active microphones may be configured to increase from one 3045 to two 3050 upon detection of specific context information S600 or high-priority context information S600. In another example, the microphone number may be configured to increase when the characteristics of the background noise change from stationary to non-stationary or from mild-level to severe-level. In such a case, a multi-microphone-based noise suppression method may be able to increase the quality of the Audio Input S270. The increase or decrease of the number of active microphones may also be based on the quality of the Audio Input S270: for example, the number of microphones may increase when the quality of the Audio Input S270, measured for instance by its signal-to-noise ratio (SNR), degrades below a certain threshold.
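A minimal sketch of that rule follows; the SNR threshold and the microphone counts are illustrative assumptions.

```python
# Sketch of scaling the number of active microphones with noise type
# and input quality. Thresholds and counts are illustrative.

SNR_LOW_DB = 10.0  # below this, enable multi-microphone noise suppression

def active_microphone_count(snr_db: float, noise_is_stationary: bool,
                            max_mics: int = 3) -> int:
    if not noise_is_stationary:
        return min(2, max_mics)  # non-stationary noise: multi-mic suppression
    if snr_db < SNR_LOW_DB:
        return min(2, max_mics)  # degraded quality: add a microphone
    return 1

print(active_microphone_count(8.5, noise_is_stationary=True))  # 2
```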
The storage of an audio logging may be configured to be changed dynamically between local storage and remote storage during the actual audio logging process or after the completion of the audio logging.
The expiration time setting may be determined at the time of audio logging or after completion of the audio logging. In one embodiment, each audio logging may be assigned a priority value according to the characteristics or statistics of the context information S600 of the audio logging. For instance, the audio logging #1 3340 may be assigned an expiration time according to its priority value.
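The sketch below illustrates one way an expiration time could be derived from a per-logging priority value; the retention periods are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Sketch of deriving an expiration time from a per-logging priority
# value. The retention table is an illustrative assumption.

RETENTION_BY_PRIORITY = {
    "high": timedelta(days=365),
    "medium": timedelta(days=90),
    "low": timedelta(days=7),
}

def expiration_time(created: datetime, priority: str) -> datetime:
    """Higher-priority loggings are retained longer before expiring."""
    return created + RETENTION_BY_PRIORITY[priority]

print(expiration_time(datetime(2011, 4, 8), "medium"))  # 2011-07-07 00:00:00
```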
The precision setting for the A/D converter unit may be configured to be changed dynamically during the Active Audio Logging State S3 based on the context information S600.
The coding format may be configured to be changed as well according to the context information S600.
For instance, the present audio codec #1 3810 may be configured to be changed to the speech codec #1 3820 upon detection of a major signal classification change from “Music” to “Speech.” In another embodiment, the coding format change, if any, may be triggered only after a “no compression mode” 3830, or alternatively it may be triggered at any time upon detection of a pre-defined context information S600 change, without a “no compression mode” 3830 in between.
Various exemplary configurations are provided to enable any person skilled in the art to make or use the methods and other structures disclosed herein. The flowcharts, block diagrams, and other structures shown and described herein are examples only, and other variants of these structures are also within the scope of the disclosure. Various modifications to these configurations are possible, and the generic principles presented herein may be applied to other configurations as well. For example, it is emphasized that the scope of this disclosure is not limited to the illustrated configurations. Rather, it is expressly contemplated and hereby disclosed that features of the different particular configurations as described herein may be combined to produce other configurations that are included within the scope of this disclosure, for any case in which such features are not inconsistent with one another. It is also expressly contemplated and hereby disclosed that where a connection is described between two or more elements of an apparatus, one or more intervening elements (such as a filter) may exist, and that where a connection is described between two or more tasks of a method, one or more intervening tasks or operations (such as a filtering operation) may exist.
The configurations described herein may be implemented in part or in whole as a hard-wired circuit, as a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a computer-readable medium as machine-readable code, such code being instructions executable by an array of logic elements such as a microprocessor or other digital signal processing unit. The computer-readable medium may be an array of storage elements such as semiconductor memory (which may include without limitation dynamic or static RAM (random-access memory), ROM (read-only memory), and/or flash RAM), or ferroelectric, polymeric, or phase-change memory; a disk medium such as a magnetic or optical disk; or any other computer-readable medium for data storage. The term “software” should be understood to include source code, assembly language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples.
Each of the methods disclosed herein may also be tangibly embodied (for example, in one or more computer-readable media as listed above) as one or more sets of instructions readable and/or executable by a machine including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). Thus, the present disclosure is not intended to be limited to the configurations shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein, including in the attached claims as filed, which form a part of the original disclosure.
Claims
1-84. (canceled)
85. A method for a mobile device, the method comprising:
- in response to automatically detecting a start event indicator, processing first portion of audio input signal to obtain first information;
- determining at least one recording parameter based on the first information; and
- reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter.
86. (canceled)
87. The method according to claim 85, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the mobile device.
88. The method according to claim 85, wherein the at least one recording parameter includes information indicative of the number of active microphone of the mobile device.
89. The method according to claim 85, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration.
90. The method according to claim 85, wherein the first information is context information describing an environment in which the mobile device is recording.
91. The method according to claim 85, wherein the first information is context information describing a characteristic of the audio input signal.
92. The method according to claim 85, wherein the start event indicator is based on a signal transmitted over a wireless channel.
93-97. (canceled)
98. An apparatus for a mobile device, the apparatus comprising:
- an audio logging processor configured to: automatically detect a start event indicator; process first portion of audio input signal to obtain first information, in response to the detecting of the start event indicator; and determine at least one recording parameter based on the first information; and
- an audio capturing unit configured to reconfigure itself based on the determined at least one recording parameter.
99. (canceled)
100. The apparatus according to claim 98, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit.
101. The apparatus according to claim 98, wherein the at least one recording parameter includes information indicative of the number of active microphone of the mobile device.
102. The apparatus according to claim 98, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration.
103. The apparatus according to claim 98, wherein the first information is context information indicative of environment in which the mobile device is recording.
104. The apparatus according to claim 98, wherein the first information is context information indicative of a characteristic of the audio input signal.
105. The apparatus according to claim 98, wherein the start event indicator is based on a signal transmitted over a wireless channel.
106-110. (canceled)
111. An apparatus for a mobile device, the apparatus comprising:
- means for automatically detecting a start event indicator;
- means for processing first portion of audio input signal to obtain first information in response to detecting the start event indicator;
- means for determining at least one recording parameter based on the first information; and
- means for reconfiguring an audio capturing unit of the mobile device based on the determined at least one recording parameter.
112. (canceled)
113. The apparatus according to claim 111, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit.
114. The apparatus according to claim 111, wherein the at least one recording parameter includes information indicative of the number of active microphone of the mobile device.
115. The apparatus according to claim 111, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration.
116. The apparatus according to claim 111, wherein the first information is context information indicative of environment in which the mobile device is recording.
117. The apparatus according to claim 111, wherein the first information is context information indicative of a characteristic of the audio input signal.
118. The apparatus according to claim 111, wherein the start event indicator is based on a signal transmitted over a wireless channel.
119-123. (canceled)
124. A non-transitory computer-readable medium comprising instructions which when executed by a processor cause the processor to:
- automatically detect a start event indicator;
- process first portion of audio input signal to obtain first information in response to detecting the start event indicator;
- determine at least one recording parameter based on the first information; and
- reconfigure an audio capturing unit of the mobile device based on the determined at least one recording parameter.
125. (canceled)
126. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes information indicative of a sampling frequency or a data width for an A/D converter of the audio capturing unit.
127. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes information indicative of the number of active microphone of the mobile device.
128. The computer-readable medium according to claim 124, wherein the at least one recording parameter includes timing information indicative of at least one microphone's wake up interval or active duration.
129. The computer-readable medium according to claim 124, wherein the first information is context information indicative of environment in which the mobile device is recording.
130. The computer-readable medium according to claim 124, wherein the first information is context information indicative of a characteristic of the audio input signal.
131. The computer-readable medium according to claim 124, wherein the start event indicator is based on a signal transmitted over a wireless channel.
132-136. (canceled)
Type: Application
Filed: Jul 17, 2015
Publication Date: Nov 12, 2015
Inventors: Te-Won Lee (SAN DIEGO, CA), Khaled Helmi El-Maleh (SAN MARCOS, CA), Heejong Yoo (SAN DIEGO, CA), Jongwon Shin (Gwangsan-gu)
Application Number: 14/802,088