Speech recognition interface for voice actuation of legacy systems
Methods and apparatus are disclosed for a technician to access a systems interface to back-end legacy systems by voice input commands to a speech recognition module. Generally, a user logs a computer into a systems interface which permits access to back-end legacy systems. Preferably, the systems interface includes a first server with middleware for managing the protocol interface. Preferably, the systems interface includes a second server for receiving requests and generating legacy transactions. After the computer is logged-on, a request for voice input is made. A speech recognition module is launched or otherwise activated. The user inputs voice commands that are processed to convert them to commands and text that can be recognized by the client software. The client software formats the requests and forwards them to the systems interface in order to retrieve the requested information.
This application is a continuation of U.S. patent application Ser. No. 09/892,664, filed Jun. 28, 2001, the contents of which are incorporated herein by reference in their entirety.
FIELD OF THE INVENTION
The present invention relates generally to the field of telecommunications dispatching and, more particularly, to methods and apparatus for providing a user-friendly input to a systems interface to legacy systems.
BACKGROUND OF THE INVENTION
Telecommunications technicians, such as so-called “Installation and Maintenance” (I&M) technicians, may visit customer sites to install new equipment, set up new services, or to service existing equipment or services. Frequently, I&M technicians need certain information stored on back-end legacy systems to perform a job order. The legacy systems may include dispatch, facilities, and billing systems. The legacy systems may store information such as job assignment, facility assignment, network testing functions, maintenance processes, and customer-specific information.
A typical telephone company, for example, can have a number of such back-end legacy systems that I&M technicians may need to access to perform job orders. These legacy systems can include, for example, a loop facility assignment control system, a loop maintenance operations system, a computer system for mainframe operations, a mechanized loop testing system, a secure network element contract server, a mechanized time reporting system, and a work activity statistical sampling plan system. Other back-end legacy and non-legacy systems may exist.
According to a previous approach, technicians could access some information from certain legacy systems using a gateway. For example, BellSouth's I&M technicians use the BellSouth Craft Access System (CAS) gateway. However, access using the gateway had significant limitations. The access was dial-up, meaning that the technician often had to tie up a customer's phone line. Additionally, logging-in and accessing data using CAS was a slow, laborious process. For example, CAS access was sometimes provided via Computer Access Terminals (CATs) using a slow 1.2 kbps connection. Because of the drawbacks of CAS, such as its slow speed, the provision of services by I&M technicians was suboptimal because of delays and impediments to getting the information necessary to complete a job order. In addition, CAS was very difficult to improve. The platform did not lend itself to creating new capabilities due to limited memory and low processor speed.
Moreover, I&M technicians often have poor or suboptimal keyboard skills. Technicians seeking information from legacy system interfaces such as CAS sometimes make mistakes in entering their requests into a computer keyboard. These mistakes can generate user errors that may result in the retrieval of incorrect information or that may result in the delayed retrieval of the correct information. Such user errors can increase the amount of time it takes I&M technicians to complete their job orders. This can increase costs and lead to customer dissatisfaction.
SUMMARY OF THE INVENTION
The subject matter presented herein is generally directed to a system and method for permitting a user, such as a technician, to access information stored on back-end legacy systems that substantially obviates one or more of the problems due to the limitations and disadvantages of the prior art.
In an embodiment, a user logs a computer onto a systems interface which permits access to back-end legacy systems. The computer is running client software to access the systems interface. Preferably, the client software is application-specific software such as the “TECHNET client application.” Preferably, the systems interface includes a first server with middleware for managing the protocol interface. Preferably, the systems interface includes a second server for receiving requests and generating legacy transactions. Preferably, the systems interface responds to the log-on attempt by presenting an introductory home page, such as a “TECHNET home page.” After the computer is logged-on, a request for voice input is made. Preferably, the request results from a technician selecting an icon or engaging a software button on the home page. In response to the request for voice input, a voice recognition module is launched or otherwise activated.
For a system intended to be operated entirely by voice by a technician, after the power switch is turned on, voice commands such as, for example, “Start Technet,” “login BYNKC2F” (a common user ID), and “password 123abc” can be used to log on and reach the Technet home page.
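To make that spoken log-on sequence concrete, here is a minimal Python sketch of how such utterances might be mapped to log-on actions. The `LoginSession` class, its method names, and the exact command phrasing are illustrative assumptions, not the disclosed TECHNET client code.

```python
import re

class LoginSession:
    """Hypothetical stand-in for the client's voice-driven log-on logic."""

    def __init__(self):
        self.user_id = None
        self.password = None
        self.started = False

    def handle_utterance(self, utterance: str) -> str:
        """Map a recognized spoken command to a log-on action."""
        text = utterance.strip().lower()
        if text == "start technet":
            self.started = True
            return "client started"
        match = re.match(r"login (\w+)", text)
        if match:
            self.user_id = match.group(1).upper()
            return f"user id set to {self.user_id}"
        match = re.match(r"password (\w+)", text)
        if match:
            self.password = match.group(1)
            return "password captured; submitting log-on"
        return "unrecognized command"

if __name__ == "__main__":
    session = LoginSession()
    for spoken in ["Start Technet", "login BYNKC2F", "password 123abc"]:
        print(spoken, "->", session.handle_utterance(spoken))
```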
The user inputs voice commands that are processed to select operations and to complete data fields recognized by the client software. The client software formats the requests and forwards them to the systems interface in order to retrieve the requested information.
The advantages of the present invention are numerous. For example, the invention can permit a user, such as a technician, to remotely access information from back-end legacy systems using a voice input to a systems interface instead of making entries on a keyboard. The voice input to the computer can be made over a wireline or over a wireless communications network. Using a voice input to the systems interface can reduce difficulties arising from keyboard entry errors. Using a voice input can also free up the technician to tend to other tasks and thereby improve his efficiency. Using a voice input can reduce costs and improve customer satisfaction.
Computer 100 is a computer used by a technician or other service person in order to access information from back-end legacy systems 140. As used herein, “computer” is used in the broadest sense of the term. A “computer” may be a microcomputer, minicomputer, laptop, personal data assistant, cellular phone, two-way pager, processor, or any computerized device capable of transmitting and receiving data over a shared network. Preferably, computer 100 is a ruggedized laptop computer.
Computer 100 remotely accesses systems interface 130 through communications network 120. Communications network 120 may be any communications network that permits a remote computer to access a remote server. Communications network 120 could be a wireline network, wireless or cellular network, satellite network, and so forth. Preferably, communications network 120 is a Public Switched Telephone Network (PSTN). For example, communications network 120 can be BellSouth Communications Network (BSCN). Alternatively, communications network 120 can be a wireless communications network such as Cingular Wireless Network.
Systems interface 130 provides a systems interface between remote (and preferably portable, e.g., a ruggedized laptop computer) computers 100 seeking data from back-end legacy systems 140. Legacy systems 140 are generally mainframe-type computer systems that maintain data for a company. According to an embodiment, legacy systems 140 may include one or more of the following systems: a loop facility assignment control system; a loop maintenance operations system; a computer system for mainframe operations; a mechanized loop testing system; a secure network element contract server; a mechanized time reporting system; and a work activity statistical sampling plan.
The general operation of the system of FIG. 1 is described below.
In accordance with FIG. 2, computer 200 accesses protocol servers 240 and transaction servers 250 through modem 220 and communications network 230 in order to retrieve information from back-end legacy systems 270.
Generally, transaction servers 250 provide an interface to back-end legacy systems 270 so that responsive information can be retrieved. Transaction servers 250 may service requests, generate legacy transactions in response to those requests, and receive responsive information to be forwarded back to protocol servers 240.
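As a rough illustration of the role ascribed to transaction servers 250 (service a request, generate a legacy transaction, return the response), the following Python sketch dispatches a formatted request to a stand-in legacy-system adapter. The adapter names, operation keys, and request fields are assumptions made for illustration.

```python
from typing import Callable, Dict

# Hypothetical adapters standing in for two of the back-end legacy systems
# named in the disclosure (facility assignment, mechanized loop testing).
def loop_facility_adapter(params: dict) -> dict:
    return {"facility": "pair 12-34", "account": params.get("account", "unknown")}

def loop_testing_adapter(params: dict) -> dict:
    return {"test_result": "line OK", "account": params.get("account", "unknown")}

LEGACY_ADAPTERS: Dict[str, Callable[[dict], dict]] = {
    "retrieve_facility_assignment": loop_facility_adapter,
    "run_loop_test": loop_testing_adapter,
}

def service_request(request: dict) -> dict:
    """Transaction-server style dispatch: identify the legacy system for the
    requested operation, generate the transaction, and return the response."""
    adapter = LEGACY_ADAPTERS.get(request.get("operation", ""))
    if adapter is None:
        return {"error": "unknown operation"}
    return adapter(request.get("params", {}))

if __name__ == "__main__":
    print(service_request({"operation": "run_loop_test",
                           "params": {"account": "555-0100"}}))
```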
The preferred systems interface (e.g., elements 240, 250 and 260 of FIG. 2) is disclosed in the co-pending application referred to herein as the '815 application.
In particular, the preferred protocol servers 240 are disclosed in the '815 application as the “protocol servers,” which may include any associated hardware and/or software disclosed in connection therewith. The preferred transaction servers 250 are disclosed in the '815 application as the “TechNet servers,” which may include any associated hardware and/or software disclosed in connection therewith.
Protocol server 240 and transaction server 250 are depicted in FIG. 2 as separate servers.
Audio input device 280 is a device for receiving voice input or other audio input into computer 200. Preferably, audio input device 280 is a microphone that is integrated into computer 200. According to an embodiment, audio input device 280 is a commercially available microphone designed for speech recognition applications, such as a microphone made by Andrea Electronics, Shure Brothers, SSCS, Inc., Telex, Labtec, Norcom Electronics, or VXI Corp. Other commercially available microphones could be used.
Computer 200 is a remote and preferably portable computer used by a technician. Computer 200 may be any of the devices discussed above for computer 100 (FIG. 1).
Communications network 230 may be a wireline communications network, preferably a PSTN. Alternatively, or in addition, communications network 230 may be a wireless or cellular communications network.
Generally, modem 220 and communications network 230 can support transmission rates in the range of about 2-56 kilobits per second, depending on whether the communications link is a wireline link or a wireless link.
Legacy systems 270 are similar to, and may include any of the variations discussed in connection with, legacy systems 140 of FIG. 1.
The general operation of the preferred system of FIG. 2 is as follows.
Computer 200 can run a client application for accessing the systems interface. Preferably, the computer is running the client application disclosed in the '815 application as the “TECHNET client application.” The client application includes a client graphical user interface (GUI) layer that provides the user interface for receiving requests for information from the user, displaying information retrieved from the back-end systems, and other user interface tasks.
Upon log-on, a primary screen or primary home page may be presented to the user of computer 200 by the client GUI. Preferably, this primary home page is a “TECHNET home page” that presents various options for requesting information (referred to herein as “operations”) from legacy systems. A technician may make the requests by keyboard entry of inputs to the client GUI. For example, the technician may select a particular operation (e.g., “Retrieve Client Account Information”) using a mouse or like pointing device. The technician may then type in the information for a client (e.g., name, address, phone number, account number, and/or social security number, etc.) into client GUI-provided data fields. The client software formats the inputs into requests that are forwarded to the systems interface in order to retrieve the desired information. Other requests for information from the various legacy systems 270 can be initiated by typing in the appropriate information.
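The “format and forward” step can be pictured with a small sketch: the client bundles the selected operation and its completed data fields into a request payload for the systems interface. The JSON framing and field names below are assumptions for illustration; the actual request format is defined in the '815 application.

```python
import json

def build_request(operation: str, fields: dict) -> bytes:
    """Package a selected operation and its completed data fields into a
    request payload that could be forwarded to the systems interface."""
    payload = {
        "operation": operation,  # e.g., "Retrieve Client Account Information"
        # Drop fields the technician left empty.
        "fields": {name: value for name, value in fields.items() if value},
    }
    return json.dumps(payload).encode("utf-8")

if __name__ == "__main__":
    raw = build_request(
        "Retrieve Client Account Information",
        {"name": "J. Smith", "phone": "404-555-0100", "account": ""},
    )
    print(raw)
```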
Alternatively, the user can opt to make the requests by voice input. After the computer is logged-on, the user makes a request for voice input. Preferably, the request for voice input is made by a technician speaking to the computer to start the application. When the PC is turned on, a speech recognition application is launched or otherwise activated. The user inputs speech commands into audio input device 280 that are processed by the voice recognition application in order to select particular operations and to complete the data fields for those operations. The speech recognition application and the client software are integrated so that voice input selection of a particular operation results in that operation being selected by the client GUI. The two software applications are also integrated so that voice input of information (for the selected operation) is used to complete the data fields presented by the client GUI. As before, the client software formats the inputs into requests that are forwarded to the systems interface in order to retrieve the desired information.
In the embodiment of FIG. 3, computer 200 runs speech recognition module 305, client application 310, and operating system 320.
Operating system 320 may be a conventional operating system program. One example of a suitable operating system is Microsoft Windows 98. Operating system 320 may be any of the other operating systems disclosed in the '815 application.
Client application 310 is the software application that runs on computer 200 to permit access to legacy systems (e.g., the legacy systems 270 of FIG. 2).
Speech recognition module 305 is the speech recognition software application that runs on computer 200 in order to permit the user to present voice input requests through audio input device 280 (FIG. 2). Speech recognition module 305 is integrated with client application 310 so that voice inputs are translated into the commands and text used by the client GUI.
For example, a home page displayed to a user by the client GUI may present options for several operations (e.g., “Get a Job,” “Retrieve Client Account Information,” and so forth). Speech recognition module 305 is programmed to recognize verbal utterances of these operations, which are translated into a command that selects the corresponding operation in much the same way as if the user had selected the operation with a pointing device.
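A brief sketch of that translation, in which a recognized operation name is routed to the same selection path a pointing device would drive. The operation list and the `select_operation` hook are hypothetical stand-ins for the client GUI.

```python
HOME_PAGE_OPERATIONS = ["Get a Job", "Retrieve Client Account Information"]

def select_operation(name: str) -> None:
    # Hypothetical GUI hook: in a real client this would drive the same
    # code path as a mouse click on the operation.
    print(f"GUI: operation '{name}' selected")

def handle_home_page_utterance(utterance: str) -> bool:
    """Treat an utterance matching an operation name as a selection command."""
    spoken = utterance.strip().lower()
    for name in HOME_PAGE_OPERATIONS:
        if spoken == name.lower():
            select_operation(name)
            return True
    return False

if __name__ == "__main__":
    handle_home_page_utterance("retrieve client account information")
```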
For example, after selecting a particular operation (e.g., “Retrieve Client Account Information”), the client GUI may present a page with data fields to be completed. Speech recognition module 305 is programmed to translate verbal inputs into text that is used to complete these data fields.
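For the field-completion case, the sketch below shows one plausible normalization of dictated input into field text, collapsing spoken digits for numeric fields. The digit-word table and the normalization behavior are assumptions, not the patented implementation.

```python
DIGIT_WORDS = {"zero": "0", "oh": "0", "one": "1", "two": "2", "three": "3",
               "four": "4", "five": "5", "six": "6", "seven": "7",
               "eight": "8", "nine": "9"}

def dictation_to_field_text(utterance: str) -> str:
    """Convert a dictated value into the text entered in a GUI data field,
    collapsing spoken digits (e.g., for phone or account numbers)."""
    tokens = utterance.lower().split()
    if tokens and all(t in DIGIT_WORDS for t in tokens):
        return "".join(DIGIT_WORDS[t] for t in tokens)
    return utterance  # ordinary dictation: use the recognized words as-is

if __name__ == "__main__":
    print(dictation_to_field_text("four zero four five five five one two one two"))
    print(dictation_to_field_text("John Smith"))
```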
According to one embodiment, speech recognition module 305 has a series of submodules or subcomponents tailored to the various pages that may be presented by the client GUI. For example, the client GUI may be adapted to present a home page presenting several operations, as well as a series of linked pages corresponding to each operation. Speech recognition module 305 may tailor its speech recognition processing according to the active page presented by the client GUI. For example, there may be a separate vocabulary accessed by speech recognition module 305 for each of the pages.
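The per-page vocabulary idea can be sketched as a recognizer that consults only the vocabulary of the page the client GUI currently shows. The page names and vocabulary contents below are invented for illustration.

```python
# Hypothetical per-page vocabularies keyed by the page the client GUI is showing.
PAGE_VOCABULARIES = {
    "home": {"get a job", "retrieve client account information", "log off"},
    "account_search": {"name", "address", "phone number", "account number",
                       "submit", "clear", "go back"},
}

class PageAwareRecognizer:
    def __init__(self, vocabularies: dict):
        self.vocabularies = vocabularies
        self.active_page = "home"

    def set_active_page(self, page: str) -> None:
        """Called when the client GUI navigates; narrows the recognition vocabulary."""
        self.active_page = page

    def match(self, utterance: str):
        """Return the utterance if it is in the active page's vocabulary, else None."""
        vocab = self.vocabularies.get(self.active_page, set())
        spoken = utterance.strip().lower()
        return spoken if spoken in vocab else None

if __name__ == "__main__":
    rec = PageAwareRecognizer(PAGE_VOCABULARIES)
    print(rec.match("account number"))   # None: not in the home-page vocabulary
    rec.set_active_page("account_search")
    print(rec.match("account number"))   # matched on the account search page
```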
Design and coding of voice recognition module 305 is well within the skill of the ordinary artisan. Currently, a number of speech recognition software applications are commercially available. Such a commercially available application could be programmed or modified or adapted to perform the specific speech recognition operations of the present invention.
Commercially available speech recognition applications include those sold by United Research Labs (e.g., VOICEACTION), Dragon Systems (e.g., NATURALLYSPEAKING and DRAGONDICTATE), IBM (e.g., VIAVOICE, Natural Language Understanding, and voice-enabled forms technology), Lernout & Hauspie (e.g., VOICE XPRESS), Philips Speech Processing (e.g., FREESPEECH98), and Verbex Voice Systems (e.g., LISTEN FOR WINDOWS). Preferably, voice recognition module 305 is based on the Nuance 7.0 product sold by Nuance Communications of Menlo Park, Calif. The suite of Nuance products includes API interfaces for various programming languages to enable the development of voice-driven applications. Alternatively, voice recognition module 305 can be coded as a custom application.
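Because the disclosure contemplates building on a commercial engine, the sketch below hides a recognizer behind the narrow interface the client needs (recognize an utterance against the active vocabulary). The `CommercialRecognizer` backend here is a stub used only to show the adapter shape; it does not reproduce the Nuance or any other vendor API.

```python
from typing import Iterable, Optional

class CommercialRecognizer:
    """Stub standing in for an off-the-shelf speech recognition engine.
    A real integration would call the vendor's API in recognize()."""

    def recognize(self, audio: bytes, vocabulary: Iterable[str]) -> Optional[str]:
        # Placeholder behavior: pretend the engine returned the first entry.
        vocab = list(vocabulary)
        return vocab[0] if vocab else None

class SpeechRecognitionModule:
    """Thin adapter exposing only what a TECHNET-style client would need."""

    def __init__(self, engine: CommercialRecognizer):
        self.engine = engine

    def recognize_command(self, audio: bytes, vocabulary: Iterable[str]) -> Optional[str]:
        return self.engine.recognize(audio, vocabulary)

if __name__ == "__main__":
    module = SpeechRecognitionModule(CommercialRecognizer())
    print(module.recognize_command(b"", ["get a job", "log off"]))
```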
In step 405, the user logs onto the systems interface. For example, a technician using computer 200 (FIG. 2) can log onto the systems interface as discussed above.
In step 410, a speech recognition application is launched or otherwise activated. For example, the speech recognition application can be launched as part of the computer start-up process. Alternatively, the speech recognition module 305 of FIG. 3 can be launched or activated in response to a request for voice input, such as the user selecting an icon or software button on the home page.
In step 415, voice inputs are received. For example, speech recognition module 305 (FIG. 3) may receive voice inputs spoken by the user into audio input device 280 (FIG. 2).
In step 420, the voice inputs are converted to a request that can be processed by the systems interface. Step 420 may comprise speech recognition module 305 processing the voice inputs in order to correlate or interpret them in accordance with a client GUI. As discussed above for FIG. 3, the voice inputs may be converted to commands that select particular operations and to text that completes the data fields presented by the client GUI.
In step 425, the request is sent to the systems interface (e.g., protocol server 240 and transaction server 250 of FIG. 2).
In step 430, the requested data is received. For example, the requested data may be received at the client application 310 (FIG. 3) running on computer 200.
In step 435, the received data is displayed on the GUI.
In step 440, the user may ask for a voice readout of the received data.
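Steps 405 through 440 can be strung together in a compact flow sketch, with the speech components and the network exchange stubbed out. Every function and field name here is a placeholder rather than the disclosed implementation, and the step 440 readout is shown as printed text standing in for synthesized speech.

```python
def log_on() -> None:                        # step 405: log onto the systems interface
    print("logged onto systems interface")

def activate_speech_recognition() -> None:   # step 410: launch/activate the recognizer
    print("speech recognition module active")

def receive_voice_inputs() -> list:          # step 415: stubbed recognized utterances
    return ["retrieve client account information",
            "phone number", "four zero four five five five one two one two",
            "submit"]

def convert_to_request(utterances: list) -> dict:  # step 420: correlate with the client GUI
    # Stub: a real implementation would use per-page vocabularies as sketched above.
    return {"operation": utterances[0], "fields": {"phone": "4045551212"}}

def send_request(request: dict) -> dict:     # steps 425-430: stubbed network round trip
    return {"operation": request["operation"], "account": "0012345",
            "status": "in service"}

def display(data: dict) -> None:             # step 435: show the result in the GUI
    print("GUI display:", data)

def read_out(data: dict) -> None:            # step 440: voice readout (text stand-in)
    print("voice readout:", ", ".join(f"{k} {v}" for k, v in data.items()))

if __name__ == "__main__":
    log_on()
    activate_speech_recognition()
    request = convert_to_request(receive_voice_inputs())
    data = send_request(request)
    display(data)
    read_out(data)
```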
The foregoing disclosure of the preferred embodiments of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many variations and modifications of the embodiments described herein will be obvious to one of ordinary skill in the art in light of the above disclosure. The scope of the invention is to be defined only by the claims appended hereto, and by their equivalents.
Further, in describing representative embodiments of the present invention, the specification may have presented the method and/or process of the present invention as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. In addition, the claims directed to the method and/or process of the present invention should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the present invention.
Claims
1. An apparatus comprising:
- a processor; and
- a computer-readable storage medium having instructions stored which, when executed by the processor, result in the processor performing operations comprising:
- presenting a visual prompt to a user;
- receiving, in response to the visual prompt, a multimodal request from the user to perform a transaction, wherein: the multimodal request has a speech portion that is recognized by an input device using a separate vocabulary for each page linked to each operation; the multimodal request has a second portion comprising a cursor input; and the multimodal request comprises input to: navigate to a search page; retrieve client account information; and navigate among a plurality of data fields using a TAB command;
- identifying a legacy system with which performing the transaction requires communication;
- establishing a connection with the legacy system;
- translating the multimodal request to a format compatible with the legacy system, to yield a translated multimodal request; and
- submitting the translated multimodal request to the legacy system.
2. The apparatus of claim 1, wherein the connection with the legacy system is over a wireless communications network.
3. The apparatus of claim 1, wherein the connection with the legacy system is over a wireline communications network.
4. The apparatus of claim 1, wherein the transaction comprises a plurality of operations in a plurality of pages associated with a graphical user interface.
5. The apparatus of claim 4, wherein a plurality of vocabularies, each corresponding to the plurality of operations, comprises the separate vocabulary for each operation.
6. A method comprising:
- presenting a visual prompt to a user;
- in response to the visual prompt, receiving a multimodal request from the user to perform a transaction, wherein: the multimodal request has a speech portion that is recognized by an input device using a separate vocabulary for each page linked to each operation; the multimodal request has a second portion comprising a cursor input; and the multimodal request comprises input to: navigate to a search page; retrieve client account information; and navigate among a plurality of data fields using a TAB command;
- identifying, via a processor, a legacy system with which performing the transaction requires communication;
- establishing a connection with the legacy system;
- translating the multimodal request to a format compatible with the legacy system, to yield a translated multimodal request; and
- submitting the translated multimodal request to the legacy system.
7. The method of claim 6, wherein the connection with the legacy system is over a wireless communications network.
8. The method of claim 6, wherein the connection with the legacy system is over a wireline communications network.
9. The method of claim 6, wherein the transaction comprises a plurality of operations in a plurality of pages associated with a graphical user interface.
10. The method of claim 9, wherein a plurality of vocabularies, each corresponding to the plurality of operations, comprises the separate vocabulary for each operation.
11. The method of claim 6, wherein the TAB command is an audible command.
12. The method of claim 6, wherein navigation among the plurality of data fields is further based on a Go To command.
13. The method of claim 12, wherein the Go To command is another audible command.
14. A computer-readable storage device having instructions stored which, when executed by a computing device, cause the computing device to perform operations comprising:
- presenting a visual prompt to a user;
- in response to the visual prompt, receiving a multimodal request from the user to perform a transaction, wherein: the multimodal request has a speech portion that is recognized by an input device using a separate vocabulary for each page linked to each operation; the multimodal request has a second portion comprising a cursor input; and the multimodal request comprises multimodal input to: navigate to a search page; retrieve client account information; and navigate among a plurality of data fields using a TAB command;
- identifying a legacy system with which performing the transaction requires communication;
- establishing a connection with the legacy system;
- translating the multimodal request to a format compatible with the legacy system, to yield a translated multimodal request; and
- submitting the translated multimodal request to the legacy system.
15. The computer-readable storage device of claim 14, wherein the connection with the legacy system is over a wireless communications network.
16. The computer-readable storage device of claim 14, wherein the connection with the legacy system is over a wireline communications network.
17. The computer-readable storage device of claim 14, wherein the transaction comprises a plurality of operations in a plurality of pages associated with a graphical user interface.
18. The computer-readable storage device of claim 17, wherein a plurality of vocabularies, each corresponding to the plurality of operations, comprises the separate vocabulary for each operation.
4776016 | October 4, 1988 | Hansen |
4905163 | February 27, 1990 | Garber et al. |
5038283 | August 6, 1991 | Caveney |
5359649 | October 25, 1994 | Rosu et al. |
5521958 | May 28, 1996 | Selig et al. |
5570283 | October 29, 1996 | Shoolery et al. |
5583937 | December 10, 1996 | Ullrich |
5623404 | April 22, 1997 | Collins et al. |
5664110 | September 2, 1997 | Green et al. |
5687212 | November 11, 1997 | Kinser, Jr. et al. |
5694323 | December 2, 1997 | Koropitzer et al. |
5696906 | December 9, 1997 | Peters |
5754830 | May 19, 1998 | Butts |
5799279 | August 25, 1998 | Gould et al. |
5805775 | September 8, 1998 | Eberman et al. |
5812977 | September 22, 1998 | Douglas |
5826270 | October 1998 | Rutkowski |
5848403 | December 8, 1998 | Gabriner et al. |
5857201 | January 5, 1999 | Wright et al. |
5873070 | February 16, 1999 | Bunte |
5884262 | March 16, 1999 | Wise |
5884266 | March 16, 1999 | Dvorak |
5896440 | April 20, 1999 | Reed et al. |
5909492 | June 1, 1999 | Payne |
5915001 | June 22, 1999 | Uppaluru |
5920846 | July 6, 1999 | Storch et al. |
5923735 | July 13, 1999 | Swartz et al. |
5931917 | August 3, 1999 | Nguyen et al. |
5953389 | September 14, 1999 | Pruett |
5960411 | September 28, 1999 | Hartman et al. |
5963940 | October 5, 1999 | Liddy et al. |
5970472 | October 19, 1999 | Allsop |
5974135 | October 26, 1999 | Breneman et al. |
6023683 | February 8, 2000 | Johnson et al. |
6023684 | February 8, 2000 | Pearson |
6026379 | February 15, 2000 | Haller et al. |
6035285 | March 7, 2000 | Schect |
6052785 | April 18, 2000 | Lin |
6058373 | May 2, 2000 | Blinn |
6080202 | June 27, 2000 | Strickland |
6081789 | June 27, 2000 | Purcell et al. |
6092192 | July 18, 2000 | Kanevsky |
6101472 | August 8, 2000 | Giangarra et al. |
6101510 | August 8, 2000 | Stone |
6115040 | September 5, 2000 | Bladow |
6115737 | September 5, 2000 | Ely |
6219644 | April 17, 2001 | VanBuskirk |
6223541 | May 1, 2001 | Farrag |
6226623 | May 1, 2001 | Schein |
6233541 | May 15, 2001 | Butts et al. |
6311178 | October 30, 2001 | Bi |
6324522 | November 27, 2001 | Peterson et al. |
6341271 | January 22, 2002 | Salvo et al. |
6366829 | April 2, 2002 | Wallace et al. |
6389337 | May 14, 2002 | Kolls et al. |
6396833 | May 28, 2002 | Zhang |
6445774 | September 3, 2002 | Kidder |
6473407 | October 29, 2002 | Ditmer |
6487278 | November 26, 2002 | Skladman et al. |
6539548 | March 25, 2003 | Hendricks |
6578005 | June 10, 2003 | Lesaint et al. |
6587822 | July 1, 2003 | Brown et al. |
6587851 | July 1, 2003 | Ditcharo |
6598167 | July 22, 2003 | Devine |
6636831 | October 21, 2003 | Profit et al. |
6671824 | December 30, 2003 | Hyland |
6724406 | April 20, 2004 | Kelley |
6736376 | May 18, 2004 | Delisle |
6738815 | May 18, 2004 | Willis et al. |
6763376 | July 13, 2004 | Devine |
6915258 | July 5, 2005 | Kontonassios |
6971072 | November 29, 2005 | Stein |
7111243 | September 19, 2006 | Ballard et al. |
7606712 | October 20, 2009 | Smith et al. |
20010016819 | August 23, 2001 | Kolls |
20020099642 | July 25, 2002 | Schwanki |
20020103906 | August 1, 2002 | Knight |
20020138656 | September 26, 2002 | Hickey |
20030006913 | January 9, 2003 | Joyce |
20030008661 | January 9, 2003 | Joyce |
20030023601 | January 30, 2003 | Fortier et al. |
20030046397 | March 6, 2003 | Trace |
20030125956 | July 3, 2003 | Lewis et al. |
20030139932 | July 24, 2003 | Shao |
20040014479 | January 22, 2004 | Milman |
20040209602 | October 21, 2004 | Joyce |
20050165626 | July 28, 2005 | Karpf |
20070185717 | August 9, 2007 | Bennett |
20090313026 | December 17, 2009 | Coffman et al. |
0 712 227 | May 1996 | EP |
2000112746 | April 2000 | JP |
- Office Action dated Jun. 14, 2006; U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- Office Action dated Feb. 14, 2008; U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- Office Action dated Jan. 15, 2004; U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- Office Action dated Dec. 12, 2006; U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- Office Action dated May 16, 2007; U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- Office Action dated Jul. 1, 2008; U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- Office Action dated Jan. 12, 2009; U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- U.S. Appl. No. 09/892,665, filed Jun. 28, 2001.
- “Return on Investment Model”, web.archive.org webpage of Norand.com, Feb. 6, 1998, pp. 1-5.
- “Norand-Payback”, web.archive.org webpage of Norand.com, Feb. 6, 1998, pp. 1-2.
- “Norand-Products”, web.archive.org webpage of Norand.com, Feb. 6, 1998, pp. 1-5.
- “Norand-Case Study”, web.archive.org webpage of Norand.com, Feb. 1998, pp. 1-3.
- Haugen, Dyan L et al., “Scheduling to Improve Field Service Quality”, Summer 1999, Decision Sciences, 30, 3, ABI/INFORM Global, pp. 783-804.
- Freedman, David H, “Case Study: Gas Attack”, Aug. 25, 1997, Forbes ASAP Supplement, pp. 45-52, Dialog 01485548 01-36536.
- McCarthy, Vince, “The Transformation of the Public Network”, Mar. 20, 1995, Telephone, v228n12, pp. 88-100, Dialog 0100552196-54914.
- Paz, Noemi M et al., “Maintenance scheduling: Issues, results and research needs”, 1994, International Journal of Operations & Production Management, v14n8 pp. 47-69, Dialog 00956565 96-05958.
- Tseng, Paul et al., “Best Practices and Design Tips”, Apr. 1999, Contracting Business, v56, n4, p2s84(1), Dialog 06305741 54514903.
- Jewett, Dale, “Service Call”, Jul. 21, 1997, Automotive News, p. 61, Dialog 05140584 47847132.
- Southerland, Randy, “Not your typical garbageman”, Feb. 2000, Waste Age, 31, 2, ss50, Dialog 11970477 60110303.
- “The Luxury class Database”, Apr. 3, 1995, InformationWeek, n 521, p100+, Dialog 00593604 23168942.
- Ivey, Mike, “Cub: Toughen Penalties vs. Ameritech”, Sep. 7, 2000, Madison Capital Times, Madison, Wisconsin, p. 1E, ProQuest ID 59714682.
- Jaffe, Charles, “Gas Supplier takes timing seriously if delivers are late, the product is free”, Feb. 5, 1989, The Morning Call, Allentown, PA, p. D.01, ProQuest ID 92485879.
- Lister, Karen, “Improvements cited in Portland cable service”, Jul. 21, 1995, Corpus Christi Caller Times, Corpus Christi, TX, Sec. B, p. 2 ProQuest 6892257.
- “Norand-Field Service”, web.archive.org webpage of Norand.com, Feb. 6, 1998, pp. 1-2.
- “Norand-Training”, web.archive.org webpage of Norand.com, Feb. 6, 1998, pp. 1-2.
- Wood, Michael, “Fighting the paperwork nemesis”, Mar. 1996, American Gas, v78n2, pp. 32-33, Dialog 01179245 98-28640.
- “Powering better customer service (Boston Edison implements mobile computing solution)”, May 1997, Communication News, v34, 5n, p50(2) Dialog 02070908 19414033.
- Kosseff, Jeffrey, “Service Delays May Lead to AT&T Fine”, Jul. 18, 2001, The Oregonian, Portland, OR, p. C.01 ProQuest ID 75527844.
- No author, “FieldCentrix Selects BellSouth's Intelligent Wireless Network to Bring Wireless Data Solution to Field Service Companies”, Aug. 24, 1999, BusinessWire, Dialog copy pp. 1-2.
- Morgan et al., “Multimodal User Interfaces in the Open Agent Architecture”, Proceedings of the 2nd International Conference on Intelligent User Interfaces, Aug. 1996.
- Conversay, “Speech Technology Solutions”, white paper, 2001.
- Commercial Speech Recognition (www.tiac.net/users/rwilcox/speech.html).
- IBM Voice Systems (www-4.ibm.com/software/speech).
- IBM Voice Systems (www-4.ibm.com/software/speech/enterprise).
- IBM Voice Systems (www-4.ibm.com/software/speech/enterprise/ms_0.html).
- Metroplex Voice Computing, Inc. (www.metroplexvoice.com).
- Metroplex Voice Computing, Inc. (www.metroplexvoice.com/toc.htm).
- Metroplex Voice Computing, Inc. (www.metroplexvoice.com/news.htm).
- Metroplex Voice Computing, Inc. (www.metroplexvoice.com/products.htm).
- Metroplex Voice Computing, Inc. (www.metroplexvoice.com/medical_speech_recognition.htm).
- Voice Recognition Systems Home Page (www.iglou.com/vrsky/rightframe.htm).
- Lernout & Hauspie (www.lhsl.com/default2.htm).
- Lernout & Hauspie, “The Language of Business” (www.lhsl.com/business).
- Lernout & Hauspie, “Voice Xpress” (www.lhsl.com/voicexpress).
- TMA Associates (www.tmaa.com).
- TMA Associates, Editor's Notes (May 2000) (www.tmaa.com/voice_web.htm).
- Fonix (www.fonix.com).
- Dragon Systems Inc.: Dragon Naturally Speaking Preferred Edition (www.zdnet.com/pcmag/features/speech/rev1.html).
- Nuance Verifier 2.0 (www.nuance.com/products/verifier.html).
- Nuance Vocalizer (www.nuance.com/products/vocalizer.html).
- Nuance 7.0 (www.nuance.com/products/nuance7.html).
- Nuance (www.nuance.com).
- Nuance Powers the Voice Web (www.nuance.com/partners/voiceweb.html).
- Nuance Product Offering (www.nuance.com/products/products.html).
- Nuance Voyager (www.nuance.com/products/voyager.html).
- Nuance Solutions (www.nuance.com/solutions/solutions.html).
- Nuance Enterprise Solutions (www.nuance.com/solutions/enterprise.html).
- Nuance Telecom Solutions (www.nuance.com/solutions/telecom.html).
- Nuance Internet Solutions (www.nuance.com/solutions/internet.html).
- Nuance Services (www.nuance.com/services/services.html).
- Nuance Speech Application Design and Development (www.nuance.com/services/speech_application_design.html).
- Delivering Speech Technologies with the Fonix FAAST® Framework, Part I (a white paper), Feb. 2001.
Type: Grant
Filed: Oct 1, 2009
Date of Patent: Oct 6, 2015
Patent Publication Number: 20100023332
Assignee: AT&T Intellectual Property I, L.P. (Atlanta, GA)
Inventors: Steven G. Smith (Roswell, GA), Ralph J. Mills (Atlanta, GA), Roland T. Morton, Jr. (Atlanta, GA), Mitchell E. Davis (Palmetto, GA)
Primary Examiner: Angela A Armstrong
Application Number: 12/571,779
International Classification: G10L 21/06 (20130101); G06F 3/16 (20060101); G10L 15/26 (20060101);