Method for determining the on-hold status in a call
A system and method are provided for detecting a hold status in a transaction between a waiting party and a queuing party. The system is adapted to use a preexisting cue profile database containing a cue profile for a queuing party. A preexisting cue profile may be used for detecting a hold status in a call between a waiting party and a queuing party. The cue profile of the queuing party may include audio cues, text cues, and cue metadata. The transaction may be telephone-based, mobile-phone-based, or internet-based.
This application claims priority from U.S. Provisional Patent Application Ser. No. 60/989,908 filed Nov. 23, 2007, the disclosure of which is herein incorporated by reference in its entirety.
FIELD OF INVENTION
Various embodiments related to telephone-based or internet-based call transactions are presented.
BACKGROUND
In telephone-based or internet-based communication, data, voice, or sound (or a combination thereof) is exchanged between the parties on a call (typically two parties). Traditionally, businesses have employed people to participate in telephone-based transactions with their clients. Recently, however, an increasing number of transactions use automated services and do not engage a person until a certain stage of the call. The embodiments presented herein relate to such transactions.
SUMMARY
The present embodiments provide, in one aspect, a system for detecting a hold status in a transaction between a waiting party and a queuing party, said system comprising a device adapted to use a preexisting cue profile database containing a cue profile for at least one queuing party.
In another aspect, the present embodiments provide for the use of a preexisting cue profile for detecting a hold status in a call between a waiting party and a queuing party.
In another aspect, the present embodiments provide a method for detecting a hold status in a transaction between a waiting party and a queuing party, said method comprising using a preexisting cue profile database containing a cue profile for at least one queuing party.
For a fuller understanding of the invention, reference is made to the following detailed description, taken in connection with the accompanying drawings illustrating various embodiments of the present invention, in which:
The embodiments and implementations described here are only exemplary. It will be appreciated by those skilled in the art that these embodiments may be practiced without certain specific details. In some instances, certain details have been omitted to avoid obscuring inventive aspects of the embodiments.
Embodiments presented herein relate to telephone-based (land or mobile) and internet-based call transactions. The words “transaction” and “call” are used throughout this application to indicate any type of telephone-based or internet-based communication. It is also envisioned that such transactions could be made with a combination of a telephone and an internet-connected device.
In all such transactions, the client (normally, but not necessarily, the dialing party) is the waiting party, or on-hold party, who interacts with an automated telephone-based service (normally, but not necessarily, the receiver of the call), which is the queuing party, or holding party (different from the on-hold party). The terms “waiting party” and “queuing party” are used throughout this application to indicate these parties; however, it will be appreciated by those skilled in the art that the scope of the embodiments given herein applies to any two parties engaged in such transactions.
During a typical transaction between a waiting party and a queuing party, the waiting party needs to take certain actions, such as pressing buttons or saying certain phrases, to proceed to different levels of the transaction. In addition, the waiting party may have to wait “on hold” for some duration before being able to talk to an actual person. Any combination of the two is possible and is addressed in the embodiments given herein.
One example is illustrated in the accompanying drawings.
It is desirable for the waiting party to find out when the hold status changes from an on-hold state to a live state by a method other than constantly listening and paying attention. Accordingly, different embodiments presented herein address the issue of “hold status detection”.
In this disclosure, the “cue profile” of a company refers to all the information available about the queuing party's hold status. In some embodiments presented herein, the preexisting cue profiles of different queuing parties are used to determine the hold status.
In some embodiments, the cue profile may contain the hold status “audio cues” which are used to detect the hold status for a particular queuing party. Audio cues are any audible cues that could bear information about the hold status. For instance, music, pre-recorded voice, silence, or any combination thereof could indicate an on-hold state. On the other hand, the voice of an actual person could indicate a live state. The event of transition from an on-hold state to a live state could be very subtle. For instance, the transition from a recorded message to a live agent speaking may not be accompanied by any distinctive audio message like a standard greeting. Nevertheless, there are audio cues indicating the transition from an on-hold state to a live state. Such audio cues are called “transition audio cues”.
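As an illustration of how a stored audio cue might be compared against audio extracted from a call, the following sketch uses normalized cross-correlation over raw samples. All names are hypothetical, and a production system would more likely compare spectral fingerprints than raw waveforms.

```python
import math

def match_cue(clip, cue, threshold=0.8):
    """Return True if the stored cue appears somewhere in the clip.

    Slides the cue across the clip and computes a normalized
    cross-correlation score at each offset; a score near 1.0 means
    the window closely matches the cue.
    """
    n = len(cue)
    if n == 0 or n > len(clip):
        return False

    def normalize(x):
        # Zero-mean, unit-variance scaling so the score is amplitude-invariant.
        mean = sum(x) / len(x)
        var = sum((v - mean) ** 2 for v in x) / len(x)
        std = math.sqrt(var) + 1e-9
        return [(v - mean) / std for v in x]

    cue_n = normalize(cue)
    best = 0.0
    for i in range(len(clip) - n + 1):
        win_n = normalize(clip[i:i + n])
        score = sum(a * b for a, b in zip(win_n, cue_n)) / n
        best = max(best, score)
    return best >= threshold
```

The threshold parameter here plays the role of the per-cue sensitivity described below for cue metadata.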
In some embodiments, certain preexisting data about a queuing party is used to determine the hold status. Such preexisting data is referred to as “cue metadata”. For example, the cue metadata may indicate the sensitivity required for each cue in order to dependably identify it in the audio stream while avoiding false-positives. In these particular embodiments, combinations of hold status audio cues in combination with cue metadata are referred to as the cue profile.
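One way such a cue profile might be represented in software is sketched below. This is purely illustrative; the patent does not prescribe any data format, and the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AudioCue:
    name: str
    clip: bytes            # fixed-length audio sample, e.g. two seconds
    sensitivity: float     # per-cue match threshold from the cue metadata

@dataclass
class CueProfile:
    company: str
    on_hold_cues: list = field(default_factory=list)      # cues indicating an on-hold state
    transition_cues: list = field(default_factory=list)   # cues indicating the end-of-hold transition
    metadata: dict = field(default_factory=dict)          # e.g. default sensitivities

# Example entry for a hypothetical company:
profile = CueProfile(company="ExampleCo", metadata={"default_sensitivity": 0.8})
profile.on_hold_cues.append(AudioCue("hold_music", b"\x00" * 16000, 0.8))
```

A cue profile database would then simply map queuing parties to such records.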
Some embodiments described herein relate to finding the cue profile of a particular queuing party. In certain embodiments, the queuing party itself is used, at least partially, to provide cue metadata to create a cue profile. However, in other embodiments, the cooperation of the queuing party is not necessary.
In some embodiments, “dial-in profiling” is used to create a cue profile of a queuing party accessible through PSTN. The method used in these embodiments is an ordinary telephone connection as used by a typical waiting party.
Dial-in profiling is an iterative process performed to determine the hold status cues of a queuing party.
In certain cases, dial-in profiling, as described herein, could be the only means for creating a cue profile of a queuing party. In addition, dial-in profiling, according to some embodiments, could also be used to update, expand, or edit a previously created cue profile.
Audio cues may be stored in a standardized format (for example, MP3) and have a fixed time length, for instance two seconds. Another type of cue used in some embodiments is a text cue, which is stored in a standard format (for example, ASCII) and has a fixed length (for example, two syllables).
In some embodiments these two cues are used to create a confidence score, as shown in the accompanying drawings.
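The patent does not specify how the two cue types are combined into a confidence score. As a minimal illustration only, a weighted average of the audio-cue and text-cue match strengths could serve; the weights and cutoff below are assumptions, not part of the disclosure.

```python
def confidence_score(audio_match: float, text_match: float,
                     audio_weight: float = 0.6) -> float:
    """Combine audio-cue and text-cue match strengths (each in [0, 1])
    into a single hold-status confidence score. The weighting is
    purely illustrative."""
    return audio_weight * audio_match + (1.0 - audio_weight) * text_match

def hold_status(score: float, threshold: float = 0.5) -> str:
    """Map a confidence score to a hold status; the cutoff is illustrative."""
    return "on-hold" if score >= threshold else "live"
```

Other combinations (e.g. taking the maximum, or per-cue weights drawn from the cue metadata) would fit the description equally well.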
In one embodiment related to the case when the audio cues are not sufficient to detect the hold status, a verbal challenge is issued to the queuing party. A verbal challenge consists of a prerecorded message that is played to the queuing party at specific instances. For example, one verbal challenge may be “is this a live person?” After a verbal challenge has been issued, a speech recognition engine determines whether there is any response from a live person to the verbal challenge. Based on this, a judgment is made as to the hold status.
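The challenge-and-listen step described above can be sketched as follows. The `play` and `recognize` callables stand in for the audio playback component and the speech recognition engine respectively, and the list of affirmative words is an assumption for illustration.

```python
def verbal_challenge(play, recognize,
                     affirmative=("yes", "yeah", "speaking", "hello")):
    """Hypothetical verbal-challenge step: play a prerecorded question,
    then ask the speech recognition engine whether a live person replied.

    Returns True if the transcribed reply suggests a live person.
    """
    play("is this a live person?")
    reply = recognize()
    return any(word in reply.lower() for word in affirmative)
```

In practice the judgment would feed back into the confidence score rather than being a hard yes/no.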
Verbal challenges can also make use of DTMF tones. For example, the challenge could be “press 1 if you are a real human”. In this case, the audio processing system searches for DTMF tones instead of an audio cue. If the queuing party is in a live state, it may send an unprompted DTMF tone down the line as a preemptive notification of the end-of-hold transition. To handle this case, the audio system continuously listens for and detects DTMF tones.
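DTMF tones can be detected in an audio stream by measuring the signal power at each of the eight standard DTMF frequencies, for instance with the Goertzel algorithm. The sketch below (the power threshold is an illustrative assumption; the patent does not specify a detection method) returns the detected key or `None`:

```python
import math

# Standard DTMF frequency pairs (ITU-T Q.23): row x column selects a key.
DTMF_LOW = [697, 770, 852, 941]
DTMF_HIGH = [1209, 1336, 1477, 1633]
DTMF_KEYS = [["1", "2", "3", "A"],
             ["4", "5", "6", "B"],
             ["7", "8", "9", "C"],
             ["*", "0", "#", "D"]]

def goertzel(samples, freq, rate):
    """Signal power at a single frequency (Goertzel algorithm)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_dtmf(samples, rate=8000):
    """Return the DTMF key heard in the block, or None if no tone is present."""
    threshold = len(samples) ** 2 / 16.0  # illustrative power floor
    low_p = [goertzel(samples, f, rate) for f in DTMF_LOW]
    high_p = [goertzel(samples, f, rate) for f in DTMF_HIGH]
    li = low_p.index(max(low_p))
    hi = high_p.index(max(high_p))
    if low_p[li] < threshold or high_p[hi] < threshold:
        return None
    return DTMF_KEYS[li][hi]
```

Running `detect_dtmf` on successive short blocks of call audio gives the always-listening behavior described above.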
A typical apparatus built in accordance with some embodiments presented herein, is referred to as a “hold detection system” and it could comprise, inter alia, some of the following components:
- Audio processing system—for extracting audio clips from the phone call and preparing them for analysis by either the speech recognition engine or the audio pattern matching component.
- Speech recognition engine—for taking an audio sample and converting human speech to text.
- Audio pattern matching component—for taking an audio sample and comparing it to the relevant audio cues contained in a cue database.
- Cue processor component—for taking results from the speech recognition engine and audio pattern matching component and computing a confidence score for the hold status.
- Audio playback component—for playing pre-recorded audio for the verbal challenge.
- Cue profile database—for containing the cue profiles for one or more companies.
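Putting the components above together, one analysis step of a hypothetical hold detection system might be wired as follows. The `recognize` and `match` callables are stand-ins for the speech recognition engine and the audio pattern matching component; all names and the 0.5 cutoff are assumptions for illustration.

```python
def cue_processor(audio_matches, text_matches):
    """Hypothetical cue processor: average component results into a
    hold-status confidence score in [0, 1]."""
    signals = list(audio_matches) + list(text_matches)
    if not signals:
        return 0.0
    return sum(signals) / len(signals)

def hold_detection_step(clip, profile, recognize, match):
    """One analysis step: transcribe the clip, score it against the
    cue profile, and judge the hold status."""
    text = recognize(clip)
    audio_scores = [match(clip, cue) for cue in profile["audio_cues"]]
    text_scores = [1.0 if t in text else 0.0 for t in profile["text_cues"]]
    score = cue_processor(audio_scores, text_scores)
    return "on-hold" if score >= 0.5 else "live"
```

A full system would repeat this step on each clip the audio processing system extracts, falling back to a verbal challenge when the score is inconclusive.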
It should be noted that any number of the components mentioned above could be integrated into a single component or device. It should also be noted that any device capable of using a preexisting cue profile database to determine the hold status in a call or transaction falls within the scope of the embodiments presented herein.
The embodiments presented herein address, inter alia, the following difficulties:
- Lack of formal signaling of the hold status in the telephone network.
- Hold status cues vary widely between companies.
- Hold status cues for a given company can change over time.
- Cues may not be sufficient to determine the end-of-hold transition.
- Companies do not make available any information about their cues.
It will be obvious to those skilled in the art that one may be able to envision alternative embodiments without departing from the scope and spirit of the embodiments presented herein.
As will be apparent to those skilled in the art, various modifications and adaptations of the structure described above are possible without departing from the present invention, the scope of which is defined in the appended claims.
Claims
1. A system for detecting a hold status in a transaction between a waiting party and a queuing party, the system comprising:
- a cue profile database containing at least one cue profile for at least one queuing party, the at least one cue profile including on-hold cues and transition audio cues of the queuing party; and
- a processor adapted to detect a hold status at least partially based on the at least one cue profile of the queuing party, wherein the system is independent of the queuing party.
2. The system of claim 1, wherein the cue profile of the queuing party comprises at least one of audio cues, cue metadata and text cues.
3. The system of claim 1, wherein the transaction is at least one of a telephone based, mobile-phone based, and internet based transaction.
4. The system of claim 1, wherein at least part of the cue profile is provided by the queuing party.
5. The system of claim 1, wherein the processor comprises, in combination, at least one of an audio processing system, a speech recognition engine, an audio pattern matching component and a cue processor component.
6. The system of claim 5, further comprising an audio playback component for playing pre-recorded audio used to perform a verbal challenge to detect a live person.
7. The system of claim 1, further comprising means to update the cue profile database after at least one of a certain period and a change in the cue profile.
8. The system of claim 1, further comprising means to use a verbal challenge to determine the hold status.
9. A method for detecting a hold status in a transaction between a waiting party and a queuing party, the method comprising:
- using a cue profile database containing at least one cue profile for at least one queuing party, the cue profile containing on-hold cues and transition audio cues; and
- detecting, by a processor, the hold status at least partially based on the cue profile, wherein the method is independent of the queuing party.
10. The method of claim 9, wherein the cue profile of the queuing party comprises at least one of audio cues, cue metadata and text cues.
11. The method of claim 9, wherein the transaction is at least one of a telephone based, mobile-phone based, and internet based transaction.
12. The method of claim 9, wherein at least part of the cue profile is provided by the queuing party.
13. The method of claim 9, wherein the method comprises, in combination, at least one of audio processing, speech recognition, audio pattern matching, and cue processing.
14. The method of claim 13, further comprising playing pre-recorded audio used to perform a verbal challenge to detect a live person.
15. The method of claim 9, wherein the method updates the cue profile database after at least one of a certain period and a change in the cue profile.
16. The method of claim 9, wherein the method uses a verbal challenge to determine the hold status.
Type: Grant
Filed: Nov 24, 2008
Date of Patent: Feb 23, 2016
Patent Publication Number: 20090136014
Assignee: FonCloud, Inc. (Toronto)
Inventors: Jason P. Bigue (Toronto), Shai Berger (Toronto), Michael J. Pultz (Toronto)
Primary Examiner: Sonia Gay
Application Number: 12/276,621