User Dedicated Automatic Speech Recognition
A multi-mode voice controlled user interface is described. The user interface is adapted to conduct a speech dialog with one or more possible speakers and includes a broad listening mode which accepts speech inputs from the possible speakers without spatial filtering, and a selective listening mode which limits speech inputs to a specific speaker using spatial filtering. The user interface switches listening modes in response to one or more switching cues.
The present invention relates to user interfaces for computer systems, and more specifically to a user dedicated, multi-mode, voice controlled interface using automatic speech recognition.
BACKGROUND ART
In voice controlled devices, automatic speech recognition (ASR) is typically triggered using a push-to-talk (PTT) button. Pushing the PTT button makes the system respond to any spoken word inputs regardless of who uttered the speech. In distant talking applications such as voice controlled televisions or computer gaming consoles, the PTT button may be replaced by an activation word command. In addition, more than one user may potentially want to perform voice control.
ASR systems are typically equipped with a signal preprocessor to cope with interference and noise. Often multiple microphones are used, particularly for distant talking interfaces, where the speech enhancement algorithm is spatially steered towards the assumed direction of the speaker (beamforming). Consequently, interference from other directions is suppressed. This improves ASR performance for the desired speaker but decreases it for other speakers. Thus, ASR performance depends on the spatial position of the speaker relative to the microphone array and on the steering direction of the beamforming algorithm.
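As a sketch of the spatial steering described above, the following delay-and-sum beamformer phase-shifts and averages the channels of a linear microphone array. The array geometry, sampling rate, and steering-angle convention are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def delay_and_sum(mics, mic_positions_m, angle_rad, fs=16000, c=343.0):
    """Steer a linear microphone array toward angle_rad by delaying and
    summing the channels (fractional delays applied as FFT phase shifts).

    mics:            (n_mics, n_samples) array of time-aligned channel data
    mic_positions_m: microphone positions along the array axis, in meters
    angle_rad:       far-field steering angle (0 = broadside)
    """
    n_mics, n_samples = mics.shape
    # Per-microphone delay for a far-field source at the steering angle.
    delays = mic_positions_m * np.sin(angle_rad) / c  # seconds
    delays -= delays.min()                            # keep all delays causal
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for channel, tau in zip(mics, delays):
        # Multiplying by exp(-j*2*pi*f*tau) delays the channel by tau seconds.
        spectrum = np.fft.rfft(channel) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n=n_samples)
    return out / n_mics
```

Signals arriving from the steered direction add coherently, while signals from other directions are summed out of phase and attenuated, which is the spatial suppression the background section refers to.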
SUMMARY
Embodiments of the present invention are directed to a multi-mode voice controlled user interface for an automatic speech recognition (ASR) system employing at least one hardware implemented computer processor, and corresponding methods of using such an interface. The user interface is adapted to conduct a speech dialog with one or more possible speakers and includes a broad listening mode which accepts speech inputs from the possible speakers without spatial filtering, and a selective listening mode which limits speech inputs to a specific speaker using spatial filtering. The user interface switches listening modes in response to one or more switching cues.
The broad listening mode may use an associated broad mode recognition vocabulary, and the selective listening mode may use a different associated selective mode recognition vocabulary. The switching cues may include one or more mode switching words from the speech inputs, one or more dialog states in the speech dialog, and/or one or more visual cues from the possible speakers. The selective listening mode may use acoustic speaker localization and/or image processing for the spatial filtering.
The user interface may operate in selective listening mode simultaneously in parallel for each of a plurality of selected speakers. In addition or alternatively, the interface may be adapted to operate in both listening modes in parallel, whereby the interface accepts speech inputs from any user in the room in the broad listening mode, and at the same time accepts speech inputs from only one selected speaker in the selective listening mode.
Embodiments of the present invention also include a device for automatic speech recognition (ASR) that includes a voice controlled user interface employing at least one hardware implemented computer processor. The user interface is adapted to conduct a speech dialog with one or more possible speakers. A user selection module is in communication with the user interface for limiting the user interface using spatial filtering based on image processing of the possible speakers so as to respond to speech inputs from only one specific speaker.
The spatial filtering may be further based on selective beamforming of multiple microphones. The user interface may be further adapted to provide visual feedback to indicate a direction of the specific speaker and/or the identity of the specific speaker. The image processing may include performing gesture recognition of visual images of the possible speakers and/or facial recognition of visual images of the faces of the possible speakers.
Embodiments of the present invention are directed towards user dedicated ASR which limits the voice control functionality to one selected user rather than to any user who happens to be in the vicinity. This may be based, for example, on a user speaking a special activation word that invokes the user limiting functionality. The system may then remain dedicated to the designated user until a specific dialog ends or some other mode switching event occurs. While operating in user dedicated mode, the system does not respond to any spoken inputs from other users (interfering speakers).
More particularly, embodiments of the present invention include a user-dedicated, multi-mode, voice-controlled interface using automatic speech recognition with two different kinds of listening modes: (1) a broad listening mode that responds to speech inputs from any user in any direction, and (2) a selective listening mode that limits speech inputs to a specific speaker in a specific location. The interface system can switch modes based on different switching cues: dialog-state, certain activation words, or visual gestures. The different listening modes may also use different recognition vocabularies, for example, a limited vocabulary in broad listening mode and a larger recognition vocabulary in selective listening mode. To limit the speech inputs to a specific speaker, the system may use acoustic speaker localization and/or video processing means to determine speaker position.
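The two listening modes and their switching cues can be sketched as a small state machine. The vocabulary contents, cue words, and speaker identifiers below are hypothetical placeholders chosen for illustration, not terms from the disclosure.

```python
BROAD, SELECTIVE = "broad", "selective"

# Illustrative vocabularies: a small activation grammar in broad mode,
# a larger command grammar once a speaker has been selected.
VOCABULARY = {
    BROAD: {"hello tv"},
    SELECTIVE: {"louder", "softer", "channel up", "channel down", "stop"},
}

class ListeningModeController:
    """Minimal two-mode controller: any speaker is heard in broad mode;
    only the dedicated speaker is heard in selective mode."""

    def __init__(self):
        self.mode = BROAD
        self.dedicated_speaker = None  # spatial position or identity

    def on_speech(self, utterance, speaker_position):
        """Accept or reject an utterance depending on the current mode."""
        if self.mode == BROAD:
            if utterance in VOCABULARY[BROAD]:
                # Activation word heard: dedicate the ASR to this speaker.
                self.mode = SELECTIVE
                self.dedicated_speaker = speaker_position
                return "switched"
            return "ignored"
        # Selective mode: inputs from other speakers are rejected.
        if speaker_position != self.dedicated_speaker:
            return "ignored"
        if utterance == "stop":  # a dialog-state cue ending the dedication
            self.mode = BROAD
            self.dedicated_speaker = None
            return "switched"
        return "accepted" if utterance in VOCABULARY[SELECTIVE] else "unknown"
```

Note how the accepted grammar changes with the mode, mirroring the limited broad-mode vocabulary and larger selective-mode vocabulary described above.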
Embodiments of the present invention also include an arrangement for automatic speech recognition (ASR) which is dedicated to a specific user which does not respond to any other user. Potential users are detected by means of image processing using images from one or more cameras. Image processing may rely on detection of one or more user cues to determine and select the dedicated user, for example, gesture recognition, facial recognition, etc. Based on the results of such user selection, the steering direction of the acoustic spatial filter can be controlled, continuing to rely on ongoing visual information. User feedback (via a GUI) can be given to identify the direction and/or identity of the selected dedicated user, for example, to indicate the spatial steering direction of the system.
The spatial filtering of a specific speaker performed in selective listening mode may be based on a combination of content information together with acoustic information, as shown in the accompanying figures.
Depending on the listening mode, different acoustic models may be used in the ASR engine, or even different ASR engines may be used. Either way, the ASR grammar needs to be switched when switching listening modes. For some number of users M, the interface may use N=M beams, N<M beams, or a single beam (N=1).
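The beam-count options just mentioned (N=M, N<M, or N=1) can be sketched as a simple allocation policy. Ordering candidates by recent activity is an illustrative assumption; the disclosure does not specify how beams are assigned when there are fewer beams than users.

```python
def allocate_beams(user_angles, n_beams):
    """Pick steering directions for the available beamformers.

    user_angles: detected speaker directions (degrees), most recently
                 active speaker first
    n_beams:     number of beams the platform can run (N=M, N<M, or N=1)
    """
    if n_beams >= len(user_angles):
        # N = M: one beam per detected user.
        return list(user_angles)
    # N < M (including N = 1): cover the most recently active speakers.
    return list(user_angles[:n_beams])
```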
It may be useful for the interface to communicate to the specific speaker when the device is in selective listening mode and listening only to him. There are several different ways in which this can be done. For example, a visual display may show a schematic image of the room scene with user highlighting to identify the location of the selected specific speaker. Or, more simply, a light bar display can be intensity coded to indicate the spatial direction of the selected specific speaker. Or an avatar may be used to deliver listening mode feedback as part of a dialog with the user(s).
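The intensity-coded light bar might work as sketched below: a steering direction is mapped to per-LED brightness, peaking at the LED nearest the selected speaker's direction. The LED count, field of view, and falloff shape are illustrative assumptions.

```python
def light_bar(angle_deg, n_leds=8, fov_deg=120.0):
    """Map a steering direction to per-LED intensities in [0, 1].

    The bar spans [-fov_deg/2, +fov_deg/2]; brightness falls off
    linearly with angular distance from each LED's center.
    """
    led_width = fov_deg / n_leds
    centers = [-fov_deg / 2 + led_width * (i + 0.5) for i in range(n_leds)]
    return [
        max(0.0, 1.0 - abs(angle_deg - center) / (2 * led_width))
        for center in centers
    ]
```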
For example, one useful application of the foregoing would be in the specific context of controlling a television or gaming console based on user dedicated ASR with broad and selective listening modes where potential users and their spatial positions are detected by means of one or more cameras. Initially, the interface system is in broad listening mode and potential user information is provided to a spatial voice activity detection process that checks speaker positions for voice activity. When the broad listening mode detects the mode switching cue, e.g. the activation word, the spatial voice activity detection process provides information about who provided that switching cue. The interface system then switches to selective listening mode by spatial filtering (beamforming and/or blind source separation) and dedicates/limits the ASR to that user. User feedback is also provided over a GUI as to listening direction, and from then on the spatial position of the dedicated user is followed by the one or more cameras. A mode transition back to broad listening mode may depend on dialog state or another switching cue.
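The spatial voice activity detection step in this workflow, attributing a detected utterance to one of the camera-detected user positions, might be sketched as follows. The angular tolerance and the use of a single direction-of-arrival estimate are illustrative assumptions.

```python
def spatial_vad(user_angles, doa_estimate_deg, tolerance_deg=10.0):
    """Attribute voice activity to a known user position.

    user_angles:      speaker directions (degrees) from camera detection
    doa_estimate_deg: acoustic direction-of-arrival of the detected speech
    Returns the matching user direction, or None if no user is close
    enough to the acoustic estimate.
    """
    if not user_angles:
        return None
    best = min(user_angles, key=lambda a: abs(a - doa_estimate_deg))
    return best if abs(best - doa_estimate_deg) <= tolerance_deg else None
```

Under this sketch, the returned direction would be handed to the beamformer when the activation word is detected, dedicating the ASR to that user as described above.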
Embodiments of the invention may be implemented in whole or in part in any conventional computer programming language such as VHDL, SystemC, Verilog, ASM, etc. Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
Embodiments can be implemented in whole or in part as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
Although various exemplary embodiments of the invention have been disclosed, it should be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the invention without departing from the true scope of the invention.
Claims
1. A device for automatic speech recognition (ASR) comprising:
- a multi-mode voice controlled user interface employing at least one hardware implemented computer processor, wherein the user interface is adapted to conduct a speech dialog with one or more possible speakers and includes: a broad listening mode which accepts speech inputs from the possible speakers without spatial filtering; and a selective listening mode which limits speech inputs to a specific speaker using spatial filtering;
- wherein the user interface switches listening modes in response to one or more switching cues.
2. A device according to claim 1, wherein the broad listening mode uses an associated broad mode recognition vocabulary and the selective listening mode uses a different associated selective mode recognition vocabulary.
3. A device according to claim 1, wherein the switching cues include one or more mode switching words from the speech inputs.
4. A device according to claim 1, wherein the switching cues include one or more dialog states in the speech dialog.
5. A device according to claim 1, wherein the switching cues include one or more visual cues from the possible speakers.
6. A device according to claim 1, wherein the selective listening mode uses acoustic speaker localization for the spatial filtering.
7. A device according to claim 1, wherein the selective listening mode uses image processing for the spatial filtering.
8. A device according to claim 1, wherein the user interface operates in selective listening mode simultaneously in parallel for each of a plurality of selected speakers.
9. A device according to claim 1, wherein the interface is adapted to operate in both listening modes in parallel, whereby the interface accepts speech inputs from any user in the room in the broad listening mode, and at the same time accepts speech inputs from only one selected speaker in the selective listening mode.
10. A computer program product encoded in a non-transitory computer-readable medium for operating an automatic speech recognition (ASR) system, the product comprising:
- program code for conducting a speech dialog with one or more possible speakers via a multi-mode voice controlled user interface adapted to: accept speech inputs from the possible speakers in a broad listening mode without spatial filtering; and limit speech inputs to a specific speaker in a selective listening mode using spatial filtering;
- wherein the user interface switches listening modes in response to one or more switching cues.
11. (canceled)
12. (canceled)
13. (canceled)
14. (canceled)
15. (canceled)
16. (canceled)
17. (canceled)
18. A method for automatic speech recognition (ASR) comprising:
- employing a multi-mode voice controlled user interface having a computer processor to conduct a speech dialog with one or more possible speakers by:
- employing a broad listening mode which accepts speech inputs from the possible speakers without spatial filtering; and
- employing a selective listening mode which limits speech inputs to a specific speaker using spatial filtering;
- wherein the user interface switches listening modes in response to one or more switching cues.
19. The method according to claim 18, wherein the broad listening mode uses an associated broad mode recognition vocabulary and the selective listening mode uses a different associated selective mode recognition vocabulary.
20. The method according to claim 18, wherein the switching cues include one or more mode switching words from the speech inputs.
21. The method according to claim 18, wherein the switching cues include one or more dialog states in the speech dialog.
22. The method according to claim 18, wherein the switching cues include one or more visual cues from the possible speakers.
23. The method according to claim 18, wherein the selective listening mode includes using acoustic speaker localization for the spatial filtering.
24. The method according to claim 18, wherein the selective listening mode includes using image processing for the spatial filtering.
25. The method according to claim 18, wherein the user interface operates in selective listening mode simultaneously in parallel for each of a plurality of selected speakers.
26. The method according to claim 18, wherein the user interface operates in both listening modes in parallel, such that the interface accepts speech inputs from any user in the room in the broad listening mode, and at the same time accepts speech inputs from only one selected speaker in the selective listening mode.
Type: Application
Filed: Mar 16, 2012
Publication Date: Feb 12, 2015
Applicant: NUANCE COMMUNICATIONS, INC. (Burlington, MA)
Inventors: Tobias Wolff (Neu-Ulm), Markus Buck (Biberach), Tim Haulick (Blaubeuren), Suhadi (Ulm)
Application Number: 14/382,839
International Classification: G10L 15/22 (20060101); G10L 25/51 (20060101); G06F 3/16 (20060101);