Smart Audio and Video Capture Systems for Data Processing Systems
A computation system comprising an orientation detection device configured to detect position information comprising a position and an orientation of the computation system, a multi-sensor system coupled to the orientation detection device, wherein the multi-sensor system is configured to capture environmental input data, wherein the multi-sensor system comprises at least one of an audio capturing system and a three-dimensional (3D) image capturing system, and wherein the environmental input data comprises at least one of audio and an image, and at least one signal processing component coupled to the orientation detection device and to the multi-sensor system, wherein the at least one signal processing component is configured to modify the captured environmental input data based on the position information.
The present application is a continuation of U.S. patent application Ser. No. 13/323,157, filed Dec. 12, 2011 by Jiong Zhou, et al., and entitled “Smart Audio and Video Capture Systems for Data Processing Systems,” which is incorporated herein by reference as if reproduced in its entirety.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not applicable.
REFERENCE TO A MICROFICHE APPENDIX
Not applicable.
BACKGROUND
Different manufacturers have introduced a variety of tablets into the consumer market, such as the products released since 2010. Tablets, also referred to as personal tablets, computer tablets, or pads, such as the iPad from Apple, are portable devices that offer advantages in documentation, email, web surfing, social activities, and personal entertainment over other types of computing devices. Generally, a tablet has a sound recording system that enables the tablet to record sound, for example to support voice communications or media applications. The digital data produced by a microphone in this recording system is used for various purposes, such as recognition, coding, and transmission. Since the sound environment includes noise, the recorded target sound is enhanced or separated from the noise in order to obtain clean sound. Some tablets may also have a three-dimensional (3D) video camera feature, which can be used to implement 3D video conferencing with other tablet or device users.
SUMMARY
In one embodiment, the disclosure includes a computation system comprising an orientation detection device configured to detect position information comprising a position and an orientation of the computation system, a multi-sensor system coupled to the orientation detection device, wherein the multi-sensor system is configured to capture environmental input data, wherein the multi-sensor system comprises at least one of an audio capturing system and a three-dimensional (3D) image capturing system, and wherein the environmental input data comprises at least one of audio and an image, and at least one signal processing component coupled to the orientation detection device and to the multi-sensor system, wherein the at least one signal processing component is configured to modify the captured environmental input data based on the position information.
In another embodiment, the disclosure includes a sound recording system comprising a direction of arrival (DOA) estimation component coupled to one or more microphones and configured to estimate DOA for a detected sound signal using received orientation information, a noise reduction component coupled to the DOA estimation component and configured to reduce noise in the detected sound signal using the DOA estimation, and a de-reverberation component coupled to the noise reduction component and the DOA estimation component and configured to remove reverberation effects in the detected sound signal using the DOA estimation.
In another embodiment, the disclosure includes a three-dimensional (3D) video capturing system comprising a camera configuration device coupled to at least two cameras and configured to arrange at least some of the cameras to properly capture one of a 3D video and a 3D image based on detected orientation information for the 3D video capturing system, and an orientation detection device coupled to the camera configuration device and configured to detect the orientation information.
In another embodiment, the disclosure includes a sound recording method implemented on a portable device, comprising detecting an orientation of the portable device, adjusting a microphone array device based on the detected orientation, recording a sound signal using the adjusted microphone array device, and estimating a direction of arrival (DOA) for the sound signal based on the detected orientation.
In another embodiment, the disclosure includes a three-dimensional (3D) video capturing method implemented on a portable device, comprising detecting an orientation of the portable device, configuring a plurality of cameras based on the detected orientation, and capturing a video or image using the configured cameras.
These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
It should be understood at the outset that although illustrative implementations of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
Emerging and future tablets may include advanced microphone arrays that may be integrated into the tablets to provide better recorded sound quality, e.g., with a higher signal-to-noise ratio (SNR). The advanced microphone array devices may be used instead of the currently used omni-directional microphones for detecting target sounds. The microphone array may be more adaptable to the direction of the incoming sound, and hence may have better noise cancellation properties. One approach to implementing the microphone array may be to emphasize a target sound by using the phase difference of the sound signals received by the microphones in the array, based on the direction of the sound source and the distance between the microphones, and hence suppress noise. Different algorithms may be used to achieve this, as sketched below.
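As one concrete illustration of the phase-difference approach, the following sketch implements a basic delay-and-sum beamformer. It is a minimal example under assumed parameters (a uniform linear array with known spacing and sampling rate), not the implementation of the disclosed system; all names and values are illustrative.

```python
# Minimal delay-and-sum beamformer sketch; parameters are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def delay_and_sum(signals, mic_spacing, doa_deg, fs):
    """Emphasize sound arriving from doa_deg by time-aligning and summing.

    signals: 2-D array with one row per microphone (uniform linear array).
    mic_spacing: distance between adjacent microphones in meters.
    doa_deg: assumed direction of arrival relative to broadside, in degrees.
    fs: sampling rate in Hz.
    """
    num_mics, num_samples = signals.shape
    # Per-microphone arrival delay implied by the assumed direction.
    delays = (np.arange(num_mics) * mic_spacing
              * np.sin(np.deg2rad(doa_deg)) / SPEED_OF_SOUND)
    # Undo the delays with frequency-domain phase shifts so that signals
    # from the target direction add coherently while off-axis noise adds
    # incoherently, which raises the SNR of the target sound.
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    aligned = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=num_samples)
```

Applying the phase shifts in the frequency domain permits fractional-sample delays, which matters because the inter-microphone delays on a tablet-sized array are typically well below one sample period.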
For example, to enhance the received sound signal, a Coherent Signal Subspace process that implements a Multiple Signal Classification (MUSIC) algorithm may be used. This algorithm may require pre-estimating the signal direction, and the estimation error in the signal direction may substantially affect the final estimate of the process. Estimating the sound signal's direction of arrival (DOA) with sufficient accuracy may be needed for some applications, such as teleconferencing systems, human-computer interfaces, and hearing aids. Such applications may involve DOA estimation of a sound source in a closed room, where the presence of a significant amount of reverberation from different directions may substantially degrade the performance of the DOA estimation algorithm. There may thus be a need for a more reliable pre-estimated DOA that locates a speaker in a reverberant room. Further, an improved DOA estimate may improve noise cancellation, since the noise source may lie in a different direction than the target sound.
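For concreteness, the sketch below computes the classic narrowband MUSIC pseudospectrum, whose peaks indicate candidate DOAs; the Coherent Signal Subspace method extends this idea to wideband signals such as speech, and the pre-estimated direction discussed above would seed that extension. The array geometry and parameters here are assumptions for illustration.

```python
# Narrowband MUSIC pseudospectrum sketch for a uniform linear array.
import numpy as np

def music_spectrum(snapshots, num_sources, mic_spacing, wavelength,
                   angles_deg=np.arange(-90, 91)):
    """snapshots: (num_mics, num_snapshots) complex narrowband samples."""
    num_mics = snapshots.shape[0]
    # Sample covariance matrix of the array output.
    cov = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Eigenvectors with the smallest eigenvalues span the noise subspace
    # (np.linalg.eigh returns eigenvalues in ascending order).
    _, eigvecs = np.linalg.eigh(cov)
    noise_subspace = eigvecs[:, : num_mics - num_sources]
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        # Steering vector for a plane wave arriving from angle theta.
        steering = np.exp(-2j * np.pi * mic_spacing / wavelength
                          * np.arange(num_mics) * np.sin(theta))
        # The pseudospectrum peaks where the steering vector is (nearly)
        # orthogonal to the noise subspace, i.e., at true source directions.
        proj = noise_subspace.conj().T @ steering
        spectrum.append(1.0 / np.real(proj.conj() @ proj))
    return angles_deg, np.array(spectrum)
```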
Another important scenario that may need attention is estimating or identifying the user's face position with respect to a tablet's 3D video camera system. For example, when the user participates in 3D video conferencing with another user using the tablet, the user may not hold the tablet in a designated proper position, or the orientation of the tablet may be unknown to the 3D video camera system. Current 3D-video-camera-enabled tablets on the market may not have the ability to capture a correct 3D video or image when the tablet is not held in the proper position. A position-aware system and a camera configuration system that uses position or orientation information to adaptively configure the system's 3D cameras to capture correct 3D video/images may be needed.
Disclosed herein are systems and methods for improved sound recording and 3D video/image capturing using tablets. The systems may be configured to detect and obtain the tablet's orientation or position information and use this information to enhance the performance of a sound recording sub-system and/or a 3D video capture sub-system in the tablet. The terms position information and orientation information are used herein interchangeably to indicate the orientation and/or tilting (e.g., in degrees) of the tablet, for instance with respect to a designated position, such as a horizontal alignment of the tablet. The systems may comprise an orientation detection device, a microphone adjusting device, a camera configuration device, a sound recording sub-system, a 3D video capturing sub-system, or combinations thereof. The orientation detection device may be used to generate the position/orientation of the tablet, which may be used by the microphone adjusting device and/or the camera configuration device. The microphone adjusting device may use this information to adjust the sensing angle of the microphone(s) and align the angle to the direction of the target sound. The position/orientation information may also be used to implement signal processing schemes in the sound recording sub-system. The camera configuration device may use this information to re-arrange the cameras for capturing video/images. The information may also be used to implement corresponding processes in the 3D video capturing sub-system to obtain the correct 3D video or image.
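The sketch below shows, with entirely hypothetical class and method names, how a single orientation reading might fan out to the microphone adjusting device, the camera configuration device, and the signal processing chain described above; it is glue logic for illustration, not any specific device API.

```python
# Hypothetical orchestration sketch; all names are illustrative placeholders.
class CaptureController:
    def __init__(self, orientation_device, mic_adjuster, camera_config, dsp):
        self.orientation_device = orientation_device
        self.mic_adjuster = mic_adjuster
        self.camera_config = camera_config
        self.dsp = dsp

    def on_capture(self):
        # One position/orientation reading drives both sub-systems.
        tilt_deg = self.orientation_device.read_tilt()
        self.mic_adjuster.steer(tilt_deg)         # align sensing angle to target
        self.camera_config.select_pair(tilt_deg)  # pick cameras for correct 3D
        self.dsp.set_orientation(tilt_deg)        # seed DOA/noise processing
```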
The tablet design 100 may have a relatively small thickness with respect to its width or length and a flat display screen (e.g., touch screen) on one side of the tablet 101. The top and bottom edges of the tablet 101 may be wider than the remaining (side) edges of the tablet 101. As such, the length of the top and bottom edges may correspond to the length of the tablet 101, and the length of the side edges may correspond to the width of the tablet 101. The display screen may comprise a substantial area of the total surface of the tablet 101. The tablet design 100 may also comprise a microphone 102, e.g., on one edge of the tablet 101 around the screen, and typically one or two cameras 104, e.g., on another edge of the tablet 101.
Typically, the audio recording system may be optimized according to one designated orientation of the tablet 101, for instance an upright position of the tablet 101.
Similarly, the 3D video capturing system may be optimized according to a selected orientation of the tablet 101, such as the upright position described above.
Instead, to allow holding and operating the tablet 401 at different orientations, the tablet 401 may comprise improved sound recording and/or 3D video capturing systems (not shown). The improved sound recording/3D video capturing systems may process the sound/video appropriately at any orientation or positioning (tilting) of the tablet 401, based on position/orientation information of the tablet 401 obtained while recording sound and/or capturing 3D video. The tablet 401 may comprise an orientation detection device (not shown) that is configured to detect the position information. The position information may be used by a sound recording system to estimate a DOA for the signal and accordingly process the sound recorded by the microphone 402. For example, only the sound detected by some of the microphones in the array, selected based on the position information, may be considered. Similarly, the position information may be used by a 3D video capturing system to filter and process the video/image captured by the cameras 404. For example, only the video/image captured by some of the cameras 404, selected based on the position information, may be considered.
The orientation detection device may be configured to generate orientation information, position data, and/or angle data that may be used by a microphone adjusting device (not shown) and/or a camera configuration device (not shown). The microphone adjusting device, which may be part of the sound recording system, may be configured to select the microphones in the array, or steer their sensors, for sound processing based on the orientation information. The camera configuration device, which may be part of the 3D video capturing system, may be configured to select or arrange the cameras 404 (e.g., direct the sensors in the cameras) for video processing based on the orientation information.
For example, when the tablet 401 is rotated relative to the horizontal plane, a position detector in the orientation detection device may detect the relative position or tilt of the tablet 401 with respect to the ground and generate the position information data accordingly. The position information data may be used in the microphone adjustment device. For instance, the microphone adjustment device may accordingly steer a maximum sensitivity angle of the microphone array, e.g., with respect to the face or mouth of the user, and/or may pass this information to a signal processing device (not shown) to conduct the signal processing process on the sound signals collected by the microphone array. The signal processing device may be part of the sound recording system. The signal processing process may include noise reduction, de-reverberation, speech enhancement, and/or other sound enhancement processes. The position information data may also be used in the camera configuration device/system to select and configure at least a pair of cameras 404 for capturing 3D videos and images.
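As a small sketch of the steering computation, the function below compensates the array's beam angle for the detected tilt, under the assumption that the user remains roughly broadside to an upright tablet; the constant and sign convention are illustrative, not taken from the disclosure.

```python
# Assumed convention: 0 degrees = broadside of an upright tablet.
ASSUMED_USER_DIRECTION_DEG = 0.0

def compensated_steering_angle(tilt_deg):
    """Beam angle, relative to the rotated microphone array, that keeps
    the maximum sensitivity angle pointed at the user after tilting."""
    # Rotating the tablet by tilt_deg rotates the array with it, so the
    # beam must be steered back by the same amount to stay on target.
    angle = ASSUMED_USER_DIRECTION_DEG - tilt_deg
    # Wrap into [-180, 180) for the downstream delay computation.
    return (angle + 180.0) % 360.0 - 180.0
```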
The microphones 501 may be two separate omni-directional microphones, two separate microphone arrays, or two microphones (sensors) in a microphone array. In other embodiments, the sound recording system 500 may comprise more than two separate microphones 501, e.g., on one or different edges of the tablet. The input to the signal processing device 502 may comprise collected sound signals from each of the microphones 501 and position information data from the microphone adjustment device 505. The orientation detection device 504 may comprise an accelerometer and/or orientation/rotation detection device configured to provide orientation/rotation information. The orientation/rotation information may be detected with respect to a designated position or orientation of the tablet, such as with respect to the horizontal plane. Additionally or alternatively, the orientation detection device 504 may comprise face/mouth recognition devices that may be used to estimate position/orientation information of the tablet with respect to the user.
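One common way such a device can derive orientation from an accelerometer is to read the at-rest gravity reaction vector and convert it to angles; the sketch below assumes axis conventions (x toward the tablet's right edge, y toward its top edge, z out of the screen) that the text does not specify.

```python
# Accelerometer-based orientation sketch; axis conventions are assumptions.
import math

def screen_rotation_deg(ax, ay):
    """In-plane rotation of the tablet; roughly 0 when held upright.

    ax, ay: accelerometer readings in g. At rest the reading opposes
    gravity, so an upright tablet reads approximately (0, +1).
    """
    return math.degrees(math.atan2(-ax, ay))

def out_of_plane_tilt_deg(ax, ay, az):
    """Tilt of the screen plane away from vertical; about +/-90 degrees
    when the tablet lies flat, with the sign giving the facing direction."""
    return math.degrees(math.atan2(az, math.hypot(ax, ay)))
```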
The position information data from the orientation detection device 504 may be sent to the microphone adjustment device 505, which may be configured to steer a maximum sensitivity angle of the microphones 501 (or microphone arrays). The microphones 501 may be steered so that the mouth of the user is aligned within the maximum sensitivity angle, and thus detection is better aligned with the direction of the incoming sound signal and away from noise sources. Alternatively or additionally, the microphone adjustment device 505 may send the position information data to the signal processing device 502. The signal processing device 502 may implement noise reduction/de-reverberation processes using the position information data to obtain clean sound. Additionally, the signal processing device 502 may implement DOA estimation for sound, as described further below. The clean sound may then be sent to the additional processing component(s) 503, which may be configured to implement signal recognition, encoding, and/or transmission.
The DOA estimation block 603 may be configured to receive the collected sound, possibly with noise, from each microphone (e.g., microphones 501) and implement DOA estimation based on received position information (e.g., from the orientation detection device 504 and/or the microphone adjustment device 505). The position information data may be used by the DOA estimation block 603 to estimate a DOA for the incoming sound signal. The DOA estimation may be achieved using DOA estimation algorithms, such as the MUSIC algorithm. The output of the DOA estimation block 603 (the DOA estimation information) may be sent as input to each of the noise reduction block 601 and the de-reverberation block 602 to achieve improved noise reduction and de-reverberation, respectively, based on the DOA information. The collected signal from each of the microphones may also be sent to the noise reduction block 601, where the noise reduction process may be performed using the DOA information. The noise reduction block 601 may forward the processed signal to the de-reverberation block 602, which may further process the sound signal to cancel or reduce any reverberation effect using the DOA information, and then output a clean sound.
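To make the chain concrete, here is a runnable toy version in which every block is a deliberately simple stand-in: a cross-correlation DOA estimate seeded by the detected tilt, integer-delay delay-and-sum noise reduction, and single-echo de-reverberation. These are not the disclosure's algorithms (which may use, e.g., MUSIC), and all constants are assumptions.

```python
# Toy two-microphone pipeline mirroring blocks 603 -> 601 -> 602.
import numpy as np

C, FS, SPACING = 343.0, 48000, 0.05  # assumed speed of sound, rate, spacing

def estimate_doa(sig_pair, tilt_deg):
    # Peak of the cross-correlation gives the inter-mic time difference,
    # which maps to an angle; the detected tilt then corrects the estimate
    # into a device-independent frame (stand-in for block 603).
    lag = (np.argmax(np.correlate(sig_pair[0], sig_pair[1], "full"))
           - (sig_pair.shape[1] - 1))
    sin_theta = np.clip(lag / FS * C / SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta)) - tilt_deg

def reduce_noise(sig_pair, doa_deg):
    # Coarse delay-and-sum toward the estimated DOA (stand-in for block 601).
    delay = int(round(SPACING * np.sin(np.radians(doa_deg)) / C * FS))
    return (sig_pair[0] + np.roll(sig_pair[1], delay)) / 2.0

def remove_reverberation(x, echo_delay=960, echo_gain=0.3):
    # Subtract one assumed discrete echo (stand-in for block 602); a real
    # de-reverberation stage would estimate the room response instead.
    y = x.copy()
    y[echo_delay:] -= echo_gain * x[:-echo_delay]
    return y

def process_recording(sig_pair, tilt_deg):
    doa = estimate_doa(sig_pair, tilt_deg)
    return remove_reverberation(reduce_noise(sig_pair, doa)), doa
```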
The orientation detection device 701 may send the estimated position information data to the camera configuration device 702, which may be configured to select a correct or appropriate pair of cameras from the cameras 703-706, e.g., according to the position information. The cameras may be selected under the assumption that the user is sitting in front of the camera, which may be the typical scenario or most general case for tablet users. For example, if the tablet is rotated by about 90 degrees, the camera configuration device 702 may select the pair of cameras whose baseline remains horizontal in the rotated position, so that the captured views stay properly offset for 3D viewing.
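The selection logic itself can be as simple as quantizing the detected rotation to the nearest quarter turn and looking up a camera pair whose baseline remains horizontal. The sketch below assumes a four-camera layout and hypothetical identifiers, since the figures are not reproduced here.

```python
# Hypothetical mapping from quantized rotation to a stereo camera pair,
# keyed on degrees of rotation; identifiers echo elements 703-706 but the
# actual layout is an assumption, not taken from the figures.
CAMERA_PAIRS = {
    0: ("cam_703", "cam_704"),
    90: ("cam_705", "cam_706"),
    180: ("cam_704", "cam_703"),  # reversed so left/right views stay correct
    270: ("cam_706", "cam_705"),
}

def select_camera_pair(rotation_deg):
    """Pick the stereo pair for the current tablet rotation, assuming the
    user sits in front of the cameras (the typical case noted above)."""
    # Snap to the nearest quarter turn; finer tilt is handled downstream.
    quadrant = (int(round(rotation_deg / 90.0)) % 4) * 90
    return CAMERA_PAIRS[quadrant]
```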
In some embodiments, the components described above may be implemented on any general-purpose computer system or smart device component with sufficient processing power, memory resources, and throughput capability to handle the necessary workload placed upon it.
The secondary storage 1004 typically comprises one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if the RAM 1008 is not large enough to hold all working data. The secondary storage 1004 may be used to store programs that are loaded into the RAM 1008 when such programs are selected for execution. The ROM 1006 is used to store instructions and perhaps data that are read during program execution. The ROM 1006 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 1004. The RAM 1008 is used to store volatile data and perhaps to store instructions. Access to both the ROM 1006 and the RAM 1008 is typically faster than access to the secondary storage 1004.
At least one embodiment is disclosed and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). For example, whenever a numerical range with a lower limit, Rl, and an upper limit, Ru, is disclosed, any number falling within the range is specifically disclosed. In particular, the following numbers within the range are specifically disclosed: R=Rl+k*(Ru−Rl), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . , 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent. Moreover, any numerical range defined by two R numbers as defined in the above is also specifically disclosed. Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. The discussion of a reference in the disclosure is not an admission that it is prior art, especially any reference that has a publication date after the priority date of this application. The disclosures of all patents, patent applications, and publications cited in the disclosure are hereby incorporated by reference, to the extent that they provide exemplary, procedural, or other details supplementary to the disclosure.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.
Claims
1. A method comprising:
- detecting an orientation of a portable device based on an indication of a rotational orientation or a tilt orientation of the portable device relative to a horizontal plane, wherein the portable device comprises a camera group comprising a plurality of pairs of cameras such that each camera pair is selectable to obtain a three-dimensional (3D) image or a 3D video; and
- selecting, by a processor of the portable device, a camera pair from the camera group to obtain the 3D image or the 3D video based on the detected orientation of the portable device.
2. The method of claim 1, further comprising capturing the 3D image or the 3D video with the camera pair selected based on the detected orientation of the portable device.
3. The method of claim 1, further comprising employing a signal processing component to modify the captured 3D image or the 3D video based on the detected orientation of the portable device.
4. The method of claim 1, wherein each pair of cameras is located proximate to a different edge of the portable device.
5. The method of claim 1, wherein the portable device is part of a smartphone, and wherein the smartphone is configured to enable at least one of video conferencing, voice calling, and a human computer interface.
6. The method of claim 1, further comprising obtaining the 3D image or the 3D video by filtering out data from one or more pairs of cameras that are not selected to obtain the 3D image or the 3D video.
7. The method of claim 1, further comprising obtaining the 3D image or the 3D video by capturing the 3D image or the 3D video with the selected camera pair.
8. The method of claim 7, further comprising:
- processing the obtained 3D image or the 3D video using a 3D image processing scheme; and
- transmitting the 3D image or the 3D video.
9. A portable device comprising:
- a camera group comprising a plurality of camera pairs such that each camera pair is selectable to capture a three-dimensional (3D) image or a 3D video; and
- a processor coupled to the camera group and configured to: determine an orientation of the portable device based on a rotational orientation or a tilt orientation of the portable device with respect to a horizontal plane; and select a camera pair from the camera group to obtain the 3D image or the 3D video based on the determined orientation of the portable device.
10. The portable device of claim 9, wherein the processor is further configured to cause the pair of cameras selected based on the determined orientation of the portable device to capture the 3D image or the 3D video.
11. The portable device of claim 10, further comprising a signal processing component coupled to the processor and configured to modify the captured 3D image or the 3D video based on the determined orientation of the portable device.
12. The portable device of claim 9, wherein the processor is further configured to obtain the 3D image or the 3D video by filtering out data from one or more pairs of cameras that are not selected to obtain the 3D image or the 3D video.
13. The portable device of claim 9, wherein each pair of cameras of the camera group is located proximate to a different edge of the portable device.
14. The portable device of claim 9, wherein the portable device is part of a smartphone, and wherein the smartphone is configured to enable at least one of video conferencing, voice calling, and a human computer interface.
15. A non-transitory computer readable medium comprising a computer program product for use by a portable device, the computer program product comprising computer executable instructions stored on the non-transitory computer readable medium such that, when executed by a processor, the instructions cause the portable device to:
- detect an orientation of the portable device based on an indication of a rotational orientation or a tilt orientation of the portable device relative to a horizontal plane, wherein the portable device comprises a camera group comprising a plurality of pairs of cameras such that each camera pair is selectable to obtain a three-dimensional (3D) image or a 3D video; and
- select, by the processor of the portable device, a camera pair from the camera group to obtain the 3D image or the 3D video based on the detected orientation of the portable device.
16. The computer program product of claim 15, wherein the instructions further cause the portable device to capture the 3D image or the 3D video with the camera pair selected based on the detected orientation of the portable device.
17. The computer program product of claim 15, wherein the instructions further cause the portable device to modify the captured 3D image or the 3D video based on the detected orientation of the portable device.
18. The computer program product of claim 15, wherein each pair of cameras is located proximate to a different edge of the portable device.
19. The computer program product of claim 15, wherein the portable device is part of a smartphone, and wherein the smartphone is configured to enable at least one of video conferencing, voice calling, and a human computer interface.
20. The computer program product of claim 15, wherein the instructions further cause the portable device to employ the pair of cameras selected based on the detected orientation of the portable device to capture the 3D image or the 3D video.
Type: Application
Filed: Dec 14, 2015
Publication Date: Apr 7, 2016
Inventors: Jiong Zhou (Santa Clara, CA), Ton Kalker (Mountain View, CA)
Application Number: 14/968,225