Visual options for audio menu

- Amazon

Aspects of the disclosure provide replacement and/or augmentation of automated audio menus of automated communication platforms with interactive digital menus. A digital menu of options associated with an automated communication platform may be provided in response to a call from a communication device to the automated communication platform having an automated audio menu for interaction with such a platform. The digital menu can include options that correspond to options in the automated audio menu, and can be displayed at the communication device via interactive buttons or other actionable indicia. The digital menu also can include options representing shortcuts for specific responses to the automated communication platform and/or options for responses customized to the communication device and the automated communication platform. Shortcuts and/or customized options can be displayed at the communication device with indicia distinctive from other options corresponding to the automated audio menu.

Description
BACKGROUND

Businesses and other organizations generally rely on automated response telephony systems in order to serve a large segment of customers while containing costs. Such systems usually utilize automated audio menus or scripts that can be played back when a call is established between a calling device, such as a mobile phone, and an automated response telephony system. Playback of the automated audio menus typically provides the available menu options relatively slowly and sequentially. In addition, automated audio menus are sometimes altered, so a previously familiar selection via a touch-tone telephone (e.g., pressing number 1) directs the caller to a different branch of an automated menu tree than it did last time it was selected.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings form part of the disclosure and are incorporated into the subject specification. The drawings illustrate example embodiments of the disclosure and, in conjunction with the present description and claims, serve to explain at least in part various principles, features, or aspects of the disclosure. Certain embodiments of the disclosure are described more fully below with reference to the accompanying drawings. However, various aspects of the disclosure can be implemented in many different forms and should not be construed as limited to the implementations set forth herein. Like numbers refer to like, but not necessarily the same or identical, elements throughout.

FIG. 1 illustrates an example of an operational environment that can leverage digital menus for automated response telephony systems in accordance with one or more aspects of the disclosure.

FIG. 2 illustrates an example of a communication device that can leverage digital menus for automated response telephony systems in accordance with one or more aspects of the disclosure.

FIGS. 3A-3B illustrate examples of digital menus for automated response telephony systems in accordance with one or more aspects of the disclosure.

FIG. 4 illustrates another example of a digital menu for automated response telephony systems in accordance with one or more aspects of the disclosure.

FIG. 5 illustrates another example of a digital menu for automated response telephony systems in accordance with one or more aspects of the disclosure.

FIGS. 6-7 illustrate examples of computing devices that can generate digital menus for automated response telephony systems in accordance with one or more aspects of the disclosure.

FIG. 8 illustrates another example of a computing device that can leverage and/or generate digital menus for automated response telephony systems in accordance with one or more aspects of the disclosure.

FIG. 9 illustrates an example of a method for replacing and/or augmenting an automated audio menu with an interactive digital menu in accordance with one or more aspects of the disclosure.

FIGS. 10-12 illustrate examples of methods for producing a digital menu for replacement and/or augmentation of an automated audio menu in accordance with one or more aspects of the disclosure.

FIG. 13 illustrates an example of a method for producing and utilizing a digital menu for replacement and/or augmentation of an automated audio menu in accordance with one or more aspects of the disclosure.

DETAILED DESCRIPTION

The disclosure recognizes and addresses, in at least certain aspects, the limited utility and lack of convenience of automated audio menus presented by automated response telephony systems. More particularly, yet not exclusively, the disclosure recognizes the difficulty or inconvenience associated with interacting with a typical automated attendant or an interactive voice response (IVR) system via an automated audio menu. In accordance with certain aspects of this disclosure, interactive digital menus can provide visual options that can replace and/or augment automated audio menus of automated communication platforms. A digital menu of options associated with an automated communication platform may be provided in response to a call from a communication device to the automated communication platform having an automated audio menu for interaction with such a platform. The digital menu can include options that correspond to options in the automated audio menu. The options can be displayed at the communication device via interactive buttons or other actionable indicia. The digital menu also can include options customized to the communication device. For example, in addition or in the alternative, the digital menu of options can include options associated with shortcuts for specific responses to the automated communication platform and/or options for responses customized to the communication device and the automated communication platform. Shortcut options and/or customized options can be displayed at the communication device with indicia distinctive from other options corresponding to the automated audio menu. Actuation or selection of an option displayed at the communication device can cause the communication device to generate and/or transmit one or more tones (such as dual-tone multi-frequency (DTMF) signals). 
Other types of code information besides the one or more tones also can be utilized or leveraged. As such, actuation or selection of a displayed option can cause the communication device to generate code information, e.g., a sequence of one or more tones, numeric data, alphanumeric data, or the like. The code information (e.g., tone(s), numeric data, alphanumeric data, or the like) so generated can correspond to the selected or actuated option.
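The translation from a selected option to code information described above can be sketched as follows. This is an illustrative assumption, not the disclosed implementation: the `MenuOption` type and `to_code_information` function are hypothetical names, although the DTMF frequency pairs are the standard keypad assignments.

```python
# Hypothetical sketch: translating a selected menu option into code
# information -- either DTMF tone frequency pairs or the raw key string.
from dataclasses import dataclass

# Standard DTMF keypad frequency pairs (low Hz, high Hz) per symbol.
DTMF_FREQUENCIES = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

@dataclass
class MenuOption:
    label: str   # text shown on the actionable indicia
    keys: str    # keypad symbols the option stands for, e.g. "*9"

def to_code_information(option, as_tones=True):
    """Return either DTMF frequency pairs or numeric/alphanumeric data."""
    if as_tones:
        return [DTMF_FREQUENCIES[k] for k in option.keys]
    return option.keys  # code information sent as digital signaling instead

live_attendant = MenuOption(label="Representative", keys="*9")
print(to_code_information(live_attendant))         # frequency pairs for "*9"
print(to_code_information(live_attendant, False))  # "*9"
```

Either form corresponds to the same actuated option; only the signaling differs, matching the tone-or-digital-packet alternatives described above.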

In accordance with the present disclosure, communication devices that can utilize or otherwise leverage the interactive digital menus can generally include one or more processors and one or more memory devices; one or more bus architectures (e.g., a system bus, a memory bus, or the like); input/output interface(s) that can include display device(s); and/or a radio unit for wireless communication. More specifically, in one example, a communication device in accordance with this disclosure can be embodied in a tethered computing device or a portable computing device, such as a mobile tablet computer, an electronic-book reader (also referred to as an e-reader), a mobile telephone (e.g., a smartphone), and the like. In another example, the communication device can be embodied in or can comprise a wearable computing device, such as a watch, goggles or head-mounted visors, or the like. In yet another example, the communication device can be embodied in or can comprise portable consumer electronics equipment, such as a camera, a media reproduction device, a portable television set, a gaming console, a navigation device, a voice-over-internet-protocol (VoIP) telephone or two-way communication device, and the like. In additional or alternative embodiments, a combination of a communication device (e.g., a VoIP telephone) and a second device (e.g., a television set) can utilize the interactive digital menus of this disclosure, where the communication device and the second device can be synchronized so as to display or otherwise present digital menu options at the second device (which may be referred to as a second screen).

With reference to the drawings, FIG. 1 illustrates an example operational environment 100 that can leverage digital menus that provide visual options for replacement and/or augmentation of automated audio menus of automated response telephony systems in accordance with one or more aspects of the disclosure. As illustrated, a communication device 110, which is represented for the sake of illustration with a wireless smartphone, can establish a call session with an automated communication platform 120 via a first communication pathway (herein referred to as “pathway I”). The call session can be embodied in or can include a voice call or a data session implemented using a cellular radio telecommunication protocol, VoIP protocols, or protocols suitable for videotelephony. The automated communication platform 120 can be embodied in or can include an interactive voice response (IVR) system and/or an automated attendant. As part of the call established between the communication device 110 and the automated communication platform 120, the automated communication platform 120 can communicate information indicative or otherwise representative of an automated audio menu of options associated with the available functionality of the automated communication platform 120. The communication device 110 can receive the information and can provide sound 114 representative of the automated audio menu. The first communication pathway can be established via at least one network of a group of one or more networks 130, and can utilize one or more telecommunication protocols (packet-switched, circuit-switched, or a combination thereof) that can permit the exchange of voice (natural and/or simulated), data, and/or signaling (e.g., tones, numeric codes, alphanumeric codes). As an example, the at least one network can include a cellular network or a portion thereof and an internet protocol (IP) multimedia subsystem (IMS) platform.
The network(s) 130 can include wireless and/or wired communication networks having various footprints (e.g., a wide area network (WAN), a metropolitan area network (MAN), a local area network (LAN), a home area network (HAN), and/or a personal area network (PAN)).

In addition, the communication device 110 also can communicate, via a second communication pathway (herein referred to as “pathway II”), with a digital menu server 140. For example, the second communication pathway may be embodied in one or more out-of-band channels available for communication within the communication protocol utilized to establish the call between the communication device 110 and the automated communication platform 120. In one embodiment, the communication device 110 can communicate or otherwise provide, via the second communication pathway, a telephone number (which is exemplified as “1-555-123-4567” in FIG. 1) associated with the automated communication platform 120. The communication device 110 can communicate (e.g., transmit) the telephone number to the digital menu server 140 at any time in relation to establishing the call with the automated communication platform 120. More specifically, the communication device 110 can transmit such a telephone number before, after, or substantially concurrently with initiating or otherwise establishing the call with the automated communication platform 120. It should be appreciated that while reference is made to a “telephone number” in the embodiments illustrated herein, the present disclosure is not so limited and other types of communication addresses are contemplated. As such, a communication address in accordance with this disclosure can be embodied in or can include a telephone number, a subscriber number, an international mobile subscriber identity (IMSI), an electronic serial number (ESN), an internet protocol (IP) address, a session initiation protocol (SIP) address, a media access control (MAC) address, and/or any other information that can be utilized or otherwise leveraged to identify a communication device and/or system with which to establish a communication link for the exchange of audio data, audio metadata, video data, video metadata, and/or signaling associated with such exchange.

The digital menu server 140 can receive the telephone number (or other type of communication address) associated with the automated communication platform 120. The telephone number may be compared with a set of one or more telephone numbers available (e.g., retained or otherwise stored) in one or more memory devices (which may be referred to as storage 150) that can be functionally coupled (e.g., communicatively coupled) to the digital menu server 140. In one scenario, the outcome of such a comparison can determine that the telephone number associated with the automated communication platform 120 is available in the storage 150 and, in response, the digital menu server 140 can query the storage 150 for a digital menu of options associated with the telephone number. As illustrated, the storage 150 can include one or more memory elements 154, such as register(s), file(s), database(s), or the like, that can contain one or more digital menus of options for known automated communication platforms (IVR systems, automated attendants, etc.). As such, the one or more memory elements 154 may be referred to as “menu(s) 154,” and can include information, such as data and/or metadata, indicative or otherwise representative of a digital menu of options associated with an automated audio menu of the automated communication platform 120. In certain implementations, the menu(s) 154 also can include a tone or a sequence of tones associated with each option in the digital menu of options. In other implementations, the menu(s) 154 also can include code information (e.g., numeric data or alphanumeric data) associated with each option in the digital menu of options. It should be appreciated that a sequence of tones can be associated with a deep option within the automated audio menu. 
Therefore, in response to the query, the digital menu server 140 can receive information indicative or otherwise representative of the digital menu of options (which also may be referred to as “digital menu”) for the automated communication platform 120. The digital menu server 140 can communicate (e.g., transmit) the information indicative or otherwise representative of such a digital menu to the communication device 110 via the second communication pathway.
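The lookup that the digital menu server 140 performs against the menu(s) 154 can be sketched as follows. The dict-backed store and the function name are illustrative assumptions, not the disclosed storage 150 design.

```python
# Illustrative sketch of the digital menu server's lookup: compare a received
# telephone number against stored menus and return the match, if any.
MENU_STORE = {
    "1-555-123-4567": {
        "greeting": "Welcome to Acme Inc.",
        "options": [
            {"label": "Account Details", "keys": "1"},
            {"label": "Representative", "keys": "*9", "highlight": True},
        ],
    },
}

def lookup_digital_menu(telephone_number):
    """Return the stored digital menu for the number, or None if unavailable."""
    return MENU_STORE.get(telephone_number)

menu = lookup_digital_menu("1-555-123-4567")
print(menu["greeting"])  # "Welcome to Acme Inc."
```

When the lookup returns nothing, the server would instead send the communication device a message indicating unavailability of a digital menu, as described later for communication device 200.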

In response to transmission of the telephone number (or other type of communication address) associated with the automated communication platform 120 and the availability of a digital menu for such a platform, the communication device 110 can receive the digital menu. The digital menu can be received as information (e.g., data, metadata, and/or signaling) that can be processed or otherwise consumed by the communication device 110. In one aspect, the communication device 110 can validate the received digital menu. An invalid digital menu can cause the communication device to present a message indicating the invalidity of the menu. In addition, when the received digital menu is valid, the communication device 110 can display or otherwise present the received digital menu for the automated communication platform 120. In certain embodiments, the communication device 110 can utilize the received information indicative or otherwise representative of the digital menu to render the digital menu without augmenting or otherwise modifying such information. The received information so rendered can be displayed by the communication device. In other embodiments, in order to display the digital menu, the communication device 110 can arrange or otherwise allocate the information indicative of the digital menu according to a template for displaying the digital menu at the communication device 110.
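The template-based arrangement described above can be sketched as follows. The template format, field names, and highlight marker are assumptions for illustration only.

```python
# Minimal sketch: arrange received digital menu information according to a
# display template, marking highlighted options distinctively.
def render_menu(menu):
    """Return a text rendering of the menu; ">>" marks highlighted options."""
    lines = [menu["greeting"], ""]
    for opt in menu["options"]:
        marker = ">>" if opt.get("highlight") else "-"
        lines.append(f"{marker} [{opt['keys']}] {opt['label']}")
    return "\n".join(lines)

menu = {
    "greeting": "Welcome to Acme Inc.",
    "options": [
        {"keys": "1", "label": "Account Details"},
        {"keys": "*9", "label": "Representative", "highlight": True},
    ],
}
print(render_menu(menu))
```

A real device could equally render the same information via fonts, colors, icons, or aural elements, as the highlighting discussion below describes.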

As described herein, the digital menu can include one or more options that are specific to the automated communication platform 120. Accordingly, in one example, the communication device 110 can present at least one (e.g., one, two, more than two, each) of the one or more options. One or more of the presented options can be displayed as actionable indicia, such as a soft button that can be actuated (e.g., touched, swiped, clicked, pressed, or the like). Therefore, the digital menu may be referred to as an interactive digital menu. In addition, one or more of the menu options that are displayed, such as options 180e and 180f, can be highlighted or otherwise presented differently from the other menu options that are displayed. A menu option can be highlighted by displaying the menu option in a font that is different from the font utilized to display other menu option(s). In addition or in the alternative, the menu option can be highlighted by displaying the menu option via graphical elements (e.g., color, images (such as icons), a combination thereof, or the like) that are different from the graphical elements utilized to display other menu option(s). Further, or as another alternative, the menu option can be highlighted by including aural elements during or at certain times during the display of the highlighted menu option. For instance, the communication device 110 can play back a sound for a predetermined period, which may be short, to convey the availability of a particular menu option. The sound also may be played when an end-user interacts with the menu option, e.g., hovers pointing indicia over the menu option. It should be appreciated that a menu option may be highlighted in any fashion that permits emphasizing, visually or otherwise, the menu option relative to other displayed menu option(s).

Highlighted options can be associated, for example, with popular or otherwise special options or a combination of options available in the automated communication platform 120. A highlighted option can correspond to a popular option, such as a number (“9”) or a combination of a number and a sign (e.g., “*9”) to reach a live attendant, such as a representative and/or operator. In certain embodiments, the communication device 110 can parse information representative of the digital menu associated with the automated communication platform 120 for a keyword or phrase indicative of a live attendant option to reach at least one of a representative or an operator. After or upon identifying such a keyword or phrase, the communication device can determine numeric information (e.g., the digit “9”) or alphanumeric information (e.g., the combination of the “*” and the digit “9” in “*9”) representative or otherwise indicative of the live attendant option. In addition or in the alternative, the communication device 110 can display distinctively or otherwise highlight indicia (actionable or otherwise) to represent the live attendant option.
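The keyword parsing described above can be sketched as follows. The keyword list and the regular expression are assumptions; the disclosure only states that a keyword or phrase indicative of a live attendant is identified.

```python
# Hedged sketch: parse digital menu option labels for a keyword indicating a
# live attendant option, then recover its numeric/alphanumeric key sequence.
import re

ATTENDANT_KEYWORDS = re.compile(r"\b(representative|operator|agent)\b", re.I)

def find_live_attendant_keys(options):
    """Return the key sequence (e.g. '*9') of the first option whose label
    mentions a live attendant, or None when no such option exists."""
    for opt in options:
        if ATTENDANT_KEYWORDS.search(opt["label"]):
            return opt["keys"]
    return None

options = [
    {"label": "Payments", "keys": "2"},
    {"label": "Speak to a representative", "keys": "*9"},
]
print(find_live_attendant_keys(options))  # "*9"
```

The recovered key sequence is what the device would then display with distinctive indicia and transmit when the highlighted option is actuated.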

In addition or in the alternative, a highlighted option can correspond to an option macro associated with a combination of numbers and/or signs that can provide certain information (e.g., date of birth and last four digits of a social security number) and/or can permit accessing a certain chain of options (e.g., “Service and Parts” and “Tire Specialist”) available in the automated communication platform 120 (e.g., an automated attendant for a car dealership). As described herein, an option macro can include an option customized to the communication device 110 and the automated communication platform 120.
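An option macro of the kind described above can be sketched as a simple expansion of a chain of menu options plus stored caller information. The function name, the example chain, and the stored field are hypothetical.

```python
# Illustrative option macro: one customized entry expands into the key
# presses for a chain of options plus stored information the platform expects.
def expand_macro(chain, fields):
    """Concatenate key sequences for a chain of menu options with any
    stored field values (e.g., a PIN or date of birth) to be sent in order."""
    return "".join(chain) + "".join(fields.values())

# e.g., "3" = Service and Parts, "2" = Tire Specialist, then a stored PIN
keys = expand_macro(["3", "2"], {"pin": "1234"})
print(keys)  # "321234"
```

Actuating the macro's single highlighted indicia would then cause the device to emit this whole sequence, sparing the caller from traversing the audio menu tree manually.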

The communication device 110 also can display a greetings message 170 associated with the automated communication platform 120. For instance, the greetings message 170 can include indicia representative of a message such as “Welcome to Acme Inc. If you know your party's three-digit extension, you can enter it at any time.” In certain embodiments, the greetings message 170 can be embodied in or can include a form having one or more input fields for inputting certain information, such as a telephone extension number.

It should be appreciated that, in one example, the portion of the automated audio menu that is received within the call between the communication device 110 and the automated communication platform 120 is presented, via the sound 114, substantially concurrently or concurrently with the greetings message 170 and the options 180a-180f. The digital menu 160 can provide more and/or different interactive options than those conveyed in the received portion of the automated audio menu. In addition or in the alternative, the digital menu 160 can provide fewer and/or different interactive options than those conveyed in the received portion of the automated audio menu. As described herein, in certain embodiments, each of such options can be presented as actionable indicia or other selectable display options. Therefore, interaction with a displayed option can cause the communication device 110 to generate and/or transmit or otherwise communicate one tone (e.g., a DTMF tone) or a sequence of two or more tones (e.g., several DTMF tones) corresponding to the option that is actuated. In one example, actuation of the displayed option can cause the communication device 110 to communicate such a tone or sequence to the automated communication platform 120 via the communication pathway between the communication device 110 and the automated communication platform 120. In addition or in the alternative, as described herein, interaction with the displayed option can cause the communication device 110 to transmit or otherwise communicate other types of code information (e.g., numeric code(s) or alphanumeric code(s)) as digital signaling (e.g., digital packets) other than one or more tones. At least a portion of the code information can correspond to the displayed option, and such signaling can be encoded or otherwise formatted according to a protocol suitable for the automated communication platform 120 to decode or otherwise interpret.
As such, regardless of the specific signaling (tone(s) or digital packets), interaction of an end-user of the communication device 110 with the automated communication platform 120 can be greatly simplified with respect to the typical interaction that relies on the automated audio menu presented as sound 114. As described in greater detail hereinafter, the options available in a digital menu can include forms or can be configured to receive information (e.g., data and/or metadata) associated with an option.

FIG. 2 illustrates an example embodiment of a communication device 200 that can utilize or otherwise leverage a digital menu for automated response telephony systems in accordance with one or more aspects of the disclosure. The communication device 200 can embody the communication device 110. As illustrated, the communication device 200 can include a communication unit 210 that can permit communication with an automated communication platform (e.g., an IVR system or an automated attendant), such as the automated communication platform 120. The communication unit 210 also can permit communication with a digital menu server, such as the digital menu server 140. Accordingly, the communication device 200 can receive, via the communication unit 210, information indicative of an automated audio menu for the automated communication platform. In one example, the communication device 200 also can transmit the telephone number (or other type of communication address) associated with the automated communication platform to the digital menu server, and can receive a suitable response from the digital menu server depending on whether a digital menu is available for the telephone number. More specifically, the digital menu server can determine that a digital menu is unavailable for the telephone number, and can communicate a message indicating unavailability of the digital menu to the communication device 200. In turn, the communication device 200 can receive such a message via the communication unit 210. In addition, the digital menu server can determine that a digital menu is available for the telephone number, and can communicate the digital menu associated with the automated communication platform to the communication device 200. In turn, the communication device 200 can receive such a digital menu via the communication unit 210. The digital menu that is received can be retained, for at least a certain period, in one or more memory devices 280 (referred to as memory 280). 
The communication unit 210 can permit wireless and/or wireline communication.

The communication device 200 can include one or more audio input units 230 and one or more audio output units 250. At least one of the audio input unit(s) 230 can include a microphone that can receive sound, such as an utterance, from an end-user of the communication device 200. At least one of the audio output unit(s) 250 can present the end-user with one or more options of the automated audio menu for the automated communication platform 120. To at least such an end, in one example, the audio output unit(s) 250 can include at least one piezoelectric speaker that can transmit audio (e.g., sound 114) to an end-user of the communication device 200.

In addition, the communication device 200 can include a display unit 220 that can present or otherwise display the information representative or otherwise indicative of the digital menu that can be received at the communication device 200. As described herein, the communication device 200 can validate the digital menu prior to rendering or displaying it. To at least such an end, the communication device 200 can include a digital-to-audio option unit 260 that can validate the digital menu in a number of ways. In one example, the digital-to-audio option unit 260 can compare an audio input signal from the automated communication platform with a reference audio signal (e.g., a segment of a few seconds of digitized audio) that is expected to be received from such a platform. More specifically, the digital menu server described herein can communicate the reference audio signal in addition to the digital menu. The reference audio signal can correspond to the audio signal that is expected to be received from the automated communication platform after a certain time interval has elapsed since the communication device 200 has initiated a call therewith. In addition, the digital-to-audio option unit 260 can compare one or more frequencies of the audio input signal with one or more frequencies of the reference audio signal. A comparison that establishes that both such signals have the same or substantially the same frequency or frequencies can indicate that the digital menu is valid. In other embodiments, the digital-to-audio option unit 260 can include a speech-to-text unit that can convert a portion of the audio input signal from the automated communication platform to text. The text so generated can be compared with at least a portion of the digital menu in order to establish that the digital menu is consistent with an automated audio menu of the automated communication platform.
In a scenario in which the digital menu is invalid, the display unit 220 can present a message accordingly. For instance, the display unit 220 can display the following messages: “Digital Menu is Outdated,” “Audio-to-Digital Menu Service is Currently Unavailable,” or any other suitable message.
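The frequency-comparison validation described above can be sketched as follows. In a real device the dominant frequencies would be derived from digitized audio (e.g., via an FFT); here they are passed in directly, and the tolerance value is an assumption.

```python
# Simplified sketch of validating a digital menu by comparing dominant
# frequencies of the incoming audio against a reference audio segment.
def menu_is_valid(input_freqs, reference_freqs, tolerance_hz=5.0):
    """Declare the digital menu valid when each dominant frequency of the
    audio input matches the reference signal within the tolerance."""
    if len(input_freqs) != len(reference_freqs):
        return False
    return all(abs(a - b) <= tolerance_hz
               for a, b in zip(input_freqs, reference_freqs))

print(menu_is_valid([440.2, 880.1], [440.0, 880.0]))  # True
print(menu_is_valid([500.0, 880.0], [440.0, 880.0]))  # False -> menu outdated
```

A failed comparison would trigger a message such as “Digital Menu is Outdated,” while a successful one permits rendering the menu.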

In a scenario in which the digital menu is valid, the display unit 220 can present, for example, the indicia representative of the greetings message 170 and the options 180a-180f in FIG. 1. In additional or alternative embodiments, the communication device 200 can utilize or otherwise leverage a display device of another device such as a television or a wireless, tethered, or wearable electronic device. The interactive options that can be presented by the display unit 220 and/or such a display device (which may be referred to as a second screen) can include actionable indicia. As described herein, it should be appreciated that certain options also can be presented or otherwise displayed without actionable indicia. In addition, in certain implementations, in order to display highlighted options, the digital-to-audio option unit 260 can parse the information indicative or otherwise representative of the digital menu for a predetermined keyword or phrase indicative of a popular option (e.g., a live attendant option) or a customized option. After or upon identifying such a keyword or phrase, the digital-to-audio option unit 260 can determine specific information (e.g., numeric information or alphanumeric information) representative or otherwise indicative of the option to be highlighted. The specific information can be rendered or otherwise processed in order to include graphical elements and/or aural elements that permit the display unit 220 to display such option distinctively from other displayed options.

In certain implementations, the digital-to-audio option unit 260 can receive a selection of an option displayed by the communication device 200 and, in response, can cause the communication unit 210 to transmit a sequence of one or more tones or other type of code information corresponding to the selected option. In certain embodiments, the display unit 220 can include an input interface—such as a capacitive touchscreen or a resistive touchscreen combined with an input interface module—that can permit receiving input information indicative of the selection. The input interface module can be included in the one or more input/output (I/O) interface(s) 240. In addition, the I/O interface(s) 240 can include at least one port (serial, parallel, or both), at least one Ethernet port, at least one pin, and can permit communication of information between the communication device 200 and an external electronic device, such as another computing device (e.g., a remote network device or an end-user device). In one example, the communication device 200 can present an example digital menu 300, as shown in FIG. 3A. The digital menu 300 can be a digital menu associated with a financial institution, and can include actionable indicia 310 conveying that the number “1” corresponds to “Account Details.” The digital menu 300 also can include actionable indicia 320 that convey that the number “2” corresponds to “Payments,” and actionable indicia 330 that conveys that the number “3” corresponds to the option to “Report Fraud.” Actuation of any of the indicia 310, 320, or 330 indicates selection of the respective option, and the digital-to-audio option unit 260 can cause or otherwise instruct the communication unit 210 to transmit a tone corresponding to the selected option.
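The selection-to-tone dispatch that the digital-to-audio option unit 260 performs for the example digital menu 300 can be sketched as follows. The dict representation and the callback-style `transmit` interface are hypothetical.

```python
# Hypothetical sketch of digital menu 300 and the dispatch on actuation:
# selecting an option's indicia causes transmission of the matching tone key.
MENU_300 = {
    "1": "Account Details",
    "2": "Payments",
    "3": "Report Fraud",
}

def on_actuation(key, transmit):
    """If `key` is a valid option, hand it to the communication unit's
    transmit function (here modeled as any callable)."""
    if key in MENU_300:
        transmit(key)

sent = []                      # stand-in for the communication unit
on_actuation("3", sent.append)  # actuate "Report Fraud"
print(sent)  # ["3"]
```

In the device, `transmit` would correspond to the communication unit 210 emitting the DTMF tone (or code information) for the selected key.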

In addition, the example digital menu 300 also includes actionable indicia 340 that corresponds to a specific sequence of tones that correspond to an option to reach a live attendant, which is represented as “Representative.” In response to the selection of such indicia, the digital-to-audio option unit 260 can cause or otherwise instruct the communication unit 210 to transmit such specific sequence of tones. In certain embodiments, such as in the example digital menu 350, shown in FIG. 3B, the indicia 360 associated with the option to reach a live attendant (e.g., representative or an operator) may not be actionable. Instead, the indicia 360 can include a timer 370 that can convey a waiting period (e.g., a 45 second period is illustrated in FIG. 3B) prior to the called automated communication platform routing the call to the representative.

The digital-to-audio option unit 260 also can cause the display unit 220 to display options that can include forms with fillable fields in order to permit input of information pertinent to a selected option. As illustrated in FIG. 4, such options can be related to another option and can be presented as concatenated options in which a parent option, illustrated with the actionable indicia 320, can include one or more child options 410 and 440. In the example options 400, the child options 410 (“Routing No.”) and 440 (“Checking No.”) include respective fillable fields 420 and 450, each of which can permit inputting financial information, for example. It should be appreciated that, for certain parent options, child options can permit inputting location information, membership information, identifying information, a combination thereof, or the like. In addition, child options can include actionable indicia 430 that, in response to actuation, can instruct or otherwise direct the digital-to-audio option unit 260 to cause the communication device 200 to generate and/or transmit a sequence of tones corresponding to the option 320 and the information conveyed in the options 410 and 440. It should be appreciated that the sequence of tones also can include pauses and/or any trailing tones, such as the tone corresponding to a “#.” In certain embodiments, actuation of the actionable indicia 430 can cause the communication device 200, via the digital-to-audio option unit 260, for example, to generate another type of code information (e.g., numeric code(s) or alphanumeric code(s)) instead of a sequence of tones corresponding to the option 320 and the information conveyed in the options 410 and 440. At least a portion of the code information can be encoded or otherwise formatted according to a protocol suitable for transmission to an automated communication platform in communication with the communication device 200.
While reference is made to parent-child relationships, it should be appreciated that such a relationship also can be represented as a tree structure determined by a first node associated with a first option (e.g., the option represented by indicia 320) and one or more second nodes related to the first node, where each of the one or more second nodes is associated with a respective second option related to the first option. For instance, options 410 and 440 can embody two such second options, where the first option is embodied in the option represented by indicia 320.
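The tree structure of FIG. 4 and the composite sequence generated on actuation of indicia 430 can be sketched as follows (Python; node labels, field values, and the pause encoding "P" are illustrative assumptions, not prescribed by the disclosure).

```python
# Sketch of the parent/child (tree) structure of FIG. 4: a parent option
# ("Payments", tone "2") with child options carrying fillable fields, and
# the composite tone sequence generated on actuation of indicia 430.
# "P" marks a pause between segments; field values are hypothetical.

PAUSE = "P"

class OptionNode:
    def __init__(self, label, tones="", children=None):
        self.label = label
        self.tones = tones          # digits this node contributes
        self.children = children or []

def composite_sequence(parent):
    """Concatenate the parent's tones, each child's field digits separated
    by pauses, and a trailing '#' tone."""
    parts = [parent.tones]
    for child in parent.children:
        parts.append(child.tones)
    return PAUSE.join(parts) + "#"

# Hypothetical field inputs for the fillable fields 420 and 450.
payments = OptionNode("Payments", "2", [
    OptionNode("Routing No.", "123456789"),
    OptionNode("Checking No.", "000987654"),
])
```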

In certain embodiments, a digital menu can include one or more custom options that can be specific to an end-user and the automated communication platform (e.g., an automated attendant of a financial institution) that is called by the communication device 200. In one example, a custom option, such as option 510 in the example digital menu 500, shown in FIG. 5, can represent a specific sequence of selections that an end-user has historically chosen. As such, custom options can be included in a digital menu based on a learned behavior of a specific end-user. More specifically, in one example, the digital-to-audio option unit 260 can cause the communication device 200 to communicate one or more selections, or information indicative or representative thereof, to a digital menu server (such as the digital menu server 140). The digital menu server can accumulate historical information for a specific end-user and the automated communication platform 120, and can apply one or more machine-learning techniques in order to learn a behavior or option preference of the end-user. After an option preference is determined using the machine-learning technique(s), a customized digital menu associated with the automated communication platform can be generated to include one or more custom options for the end-user. Such a customized digital menu can be retained in an information storage integrated into or functionally coupled to the digital menu server.
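The disclosure leaves the machine-learning technique open; as one minimal, illustrative choice, a digital menu server could count each end-user's historical selection sequences for a given platform and promote the most frequent sequence to a custom option, as sketched below (Python; hypothetical names).

```python
# Minimal sketch of learning an option preference from accumulated call
# history: frequency counting over full selection sequences. This is one
# illustrative technique; the disclosure does not prescribe a specific
# machine-learning method.
from collections import Counter

def learn_custom_option(history):
    """history: selection sequences (tuples of digits) made by one end-user
    against one automated communication platform. Returns the most frequent
    sequence, suitable for display as a custom option, or None if empty."""
    if not history:
        return None
    (sequence, _count), = Counter(history).most_common(1)
    return sequence

# Hypothetical history: the user usually selects "2" then "1".
calls = [("2", "1"), ("2", "1"), ("3",), ("2", "1")]
```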

FIG. 6 illustrates an example of an embodiment of a digital menu server 600 in accordance with one or more aspects of the disclosure. In certain scenarios, the digital menu server 600 can embody or can include the digital menu server 140 discussed with respect to the example operational environment 100 in FIG. 1. The digital menu server 600 can include a menu collection unit 610 that can initiate a communication (e.g., a call) with an automated communication platform (e.g., the automated communication platform 120) in order to traverse or otherwise navigate an automated audio menu that is available. In doing so, in one aspect, the menu collection unit 610 can determine a telephone number (e.g., 1-555-123-4567) associated with the automated communication platform. In addition, the menu collection unit 610 can collect one or more automated audio menu options provided by the called automated communication platform. In certain embodiments, the menu collection unit 610 can be embodied in or can comprise an autodialer device, which can include a speech recognition engine that can permit collection of one or more of the audio menu options. In order to traverse or navigate the automated audio menu, the menu collection unit 610 can leverage at least one of the I/O interface(s) 630. The at least one of the I/O interface(s) 630 can receive or otherwise access menu information indicative or otherwise representative of the automated audio menu available. The I/O interface(s) 630 can include at least one port (serial, parallel, or both), at least one Ethernet port, at least one pin, and can permit communication of information between the digital menu server 600 and the automated communication platform.
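The traversal performed by the menu collection unit 610 can be sketched as a depth-first walk that presses each advertised digit and records the options it hears. In the Python sketch below, the platform is a stub dictionary standing in for a live call plus speech recognition; all names are illustrative assumptions.

```python
# Sketch of a menu collection unit traversing an automated audio menu
# depth-first. The stub IVR maps a path of digits pressed so far to the
# (digit, label) options announced at that point; in practice these would
# come from a call and a speech recognition engine.

IVR = {
    (): [("1", "Account Details"), ("2", "Payments"), ("3", "Report Fraud")],
    ("2",): [("1", "Make a Payment"), ("2", "Payment History")],
}

def collect_menu(platform, path=()):
    """Return {path_of_digits: label} for every option reachable from path."""
    collected = {}
    for digit, label in platform.get(path, []):
        option_path = path + (digit,)
        collected[option_path] = label
        collected.update(collect_menu(platform, option_path))
    return collected
```

Each collected (path, label) pair can then be handed to a menu composition unit to generate the corresponding digital option.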

The menu collection unit 610 can initiate communication with an automated communication platform at predetermined times (e.g., according to a schedule or periodically) and/or in response to certain events. In certain embodiments, the menu collection unit 610 can utilize session initiation protocol (SIP) or other packet-switched communication protocols in order to initiate automatically a call session with the automated communication platform. In one aspect, after the communication is established, other communication may be utilized to receive menu information.

At least a portion of the menu information that can be received at the digital menu server 600 can be provided to a menu composition unit 620 that can configure data and/or metadata indicative or otherwise representative of an option in an automated audio menu made available by an automated communication platform. Therefore, the menu composition unit 620 can generate a digital menu of options corresponding to the options in the automated audio menu. Digital menus so generated can be retained in one or more memory elements 644 (referred to as menu(s) 644) in one or more memory devices 640 (referred to as storage 640).

Each of the menu(s) 644 retained in the storage 640 can be indexed or otherwise catalogued according to the telephone number or other type of communication address of the automated communication platform corresponding to a menu. Therefore, in one aspect, the digital menu server 600 can receive a telephone number or other type of communication address, and can determine if a corresponding digital menu is available. As described herein, in response to the corresponding digital menu being available, the digital menu server 600 can communicate such a menu to a device that provided the telephone number or other type of communication address.
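The index-and-lookup behavior described above can be sketched as follows (Python; the digit-only normalization, so that "1-555-123-4567" and "15551234567" resolve to the same record, is an illustrative assumption).

```python
# Sketch of digital menu storage catalogued by the called platform's
# telephone number. Normalization strips formatting so equivalent
# representations of a number retrieve the same menu.

def normalize(address):
    """Reduce a telephone number to its digits."""
    return "".join(ch for ch in address if ch.isdigit())

class MenuStore:
    def __init__(self):
        self._menus = {}

    def retain(self, address, menu):
        self._menus[normalize(address)] = menu

    def lookup(self, address):
        """Return the digital menu for this address, or None if no
        corresponding menu is available."""
        return self._menus.get(normalize(address))

store = MenuStore()
store.retain("1-555-123-4567", {"1": "Account Details"})
```

A `None` result models the unavailable-menu case, in which the server would instead instruct the communication device to log its responses (as described below for the digital menu server 700).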

In the illustrated embodiment, two or more of the menu collection unit 610; the menu composition unit 620; at least one of the I/O interface(s) 630; and the storage 640 can exchange information (e.g., data, metadata, and/or signaling) via a bus architecture 650. In one example, the bus architecture 650 can include at least one of a system bus, a memory bus, an address bus, or a message bus.

FIG. 7 illustrates an example embodiment of a digital menu server 700 that can permit generation of interactive digital menus associated with automated audio menus and/or generation of customized options (which also may be referred to as “option macros”) in accordance with one or more aspects of the disclosure. With respect to the generation of interactive digital menus, the digital menu server 700 can receive a telephone number or other type of communication address of an automated communication platform from a communication device (e.g., the communication device 110), and can determine that a digital menu associated with an automated audio menu for such a platform is unavailable. In response, the digital menu server 700 can instruct the communication device to log responses (which can include option selections) of the communication device to the automated audio menu. To at least such an end, in one aspect, the digital menu server 700 can direct (e.g., transmit a command to) the communication device to provide tone information indicative of one or more tones transmitted by the communication device to the automated communication platform. The digital menu server 700 can collect at least a portion of the tone information and can generate digital options corresponding to such information. Accordingly, in the illustrated embodiment, the digital menu server 700 can include a response collection unit 710 that can receive and parse the tone information in order to identify a menu option selected at the communication device. The parsed information can be supplied or otherwise conveyed to the menu composition unit 620 that can generate data and/or metadata indicative or otherwise representative of the menu option. The menu composition unit 620 can retain the data and/or metadata in a memory element of the storage 640 in order to create or update a record of the digital menu. 
Tone information can be accumulated over several calls to the automated communication platform in order to generate a complete or substantially complete digital menu. In certain implementations, a digital menu so created can be directly validated or completed via a call to the automated communication platform by the menu collection unit 610, which can traverse or otherwise navigate the automated audio menu of the automated communication platform.
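Accumulating per-call tone logs into one partial menu record can be sketched as follows (Python; the representation of a path as a digit string is an illustrative assumption). Every prefix of a logged selection is recorded, since each prefix is a menu node the device actually visited, so coverage grows toward completeness as calls accumulate.

```python
# Sketch of merging tone logs from several calls into a partial digital
# menu: the set of distinct option paths observed across all calls,
# including every prefix of each logged selection.

def merge_tone_logs(logs):
    """logs: one list of digit-path strings per call, e.g. [["21"], ["3"]].
    Returns the set of all observed option paths and their prefixes."""
    paths = set()
    for call in logs:
        for selection in call:
            for depth in range(1, len(selection) + 1):
                paths.add(selection[:depth])
    return paths
```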

In one embodiment, a communication device can be instructed to provide tone information and an input audio signal representative of an audible option of the automated audio menu. Such a signal can be received by a microphone and/or other audio input unit(s) at the communication device in response to a speaker of the communication device generating audio representative of the audible option. At the digital menu server, the menu collection unit 610 can receive at least a portion of the input audio signal and can recognize speech indicative of the audible option. Therefore, in one aspect, the digital menu server 700 can identify the option in the automated audio menu and one or more tones associated with the option. As such, the menu composition unit 620 can generate data and/or metadata indicative of the option and/or the tone(s), and assign such data and/or metadata to a digital menu option corresponding to the audible option.

In connection with generation of a customized option, the response collection unit 710 also can acquire information indicative of a response to available options in a digital menu (e.g., digital menu 160) displayed at a communication device (e.g., communication device 110). The digital menu can correspond to an automated audio menu for an automated communication platform. The communication device can be in communication with or otherwise functionally coupled to the digital menu server 700. The acquired information can be retained in the storage 640 and can be categorized according to identifying information (e.g., telephone number, subscriber number, IMSI, ESN, or the like) of the communication device. As such, the information can be categorized based at least on the communication device. The information so acquired can be utilized to generate an option macro for the communication device. More specifically, in one aspect, the digital menu server 700 can include a machine-learning unit 720 that can apply one or more machine-learning techniques in order to learn a selection behavior of the communication device (e.g., the communication device 110). After the selection behavior is determined, the menu composition unit 620 can generate a customized digital menu associated with the automated communication platform and with the communication device that communicates with the automated communication platform. In one example, the customized digital menu can include one or more custom options to access a service and/or information from the automated communication platform. Customized digital menus can be retained in one or more memory elements 730 (referred to as menu(s) 730) within the storage 640.

In the illustrated embodiment, two or more of the menu collection unit 610; the menu composition unit 620; at least one of the I/O interface(s) 630; the response collection unit 710; the machine-learning unit 720; and the storage 640 can exchange information (e.g., data, metadata, and/or signaling) via a bus architecture 650.

FIG. 8 illustrates a block diagram of an example computational environment 800 for replacement and/or augmentation of automated audio menus with interactive digital menus for automated response telephony systems in accordance with one or more aspects of the disclosure. The example computational environment 800 is merely illustrative and is not intended to suggest or otherwise convey any limitation as to the scope of use or functionality of the computational environment's architecture. In addition, the illustrative computational environment 800 depicted in FIG. 8 should not be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the example operational environments of the disclosure. The example computational environment 800 or portions thereof can embody or can constitute the operational environments described hereinbefore. As such, the computing device 810 can embody or can constitute, for example, any of the communication devices or digital menu servers described herein. In one example, the computing device 810 can be embodied in a portable personal computer or a handheld computing device, such as a mobile tablet computer, an electronic-book reader, a mobile telephone (e.g., a smartphone), and the like. In another example, the computing device 810 can be embodied in a wearable computing device, such as a watch, goggles or head-mounted visors, or the like. In yet another example, the computing device 810 can be embodied in portable consumer electronics equipment, such as a camera, a portable television set, a gaming console, a navigation device, a voice-over-internet-protocol telephone, a media playback device, or the like.

The computational environment 800 represents an example implementation of the various aspects or features of the disclosure in which the processing or execution of operations described in connection with the replacement and/or augmentation of automated audio menus with interactive digital menus disclosed herein can be performed in response to execution of one or more software components at the computing device 810. It should be appreciated that the one or more software components can render the computing device 810, or any other computing device that contains such components, a particular machine for the replacement and/or augmentation of automated audio menus with interactive digital menus as described herein, among other functional purposes. A software component can be embodied in or can comprise one or more computer-accessible instructions, e.g., computer-readable and/or computer-executable instructions. In one scenario, at least a portion of the computer-accessible instructions can embody and/or can be executed to perform at least a part of one or more of the example methods described herein, such as the example methods presented in FIGS. 9-13. For instance, to embody one such method, at least the portion of the computer-accessible instructions can be persisted (e.g., stored, made available, or stored and made available) in a non-transitory computer storage medium and executed by a processor. The one or more computer-accessible instructions that embody a software component can be assembled into one or more program modules, for example, that can be compiled, linked, and/or executed at the computing device 810 or other computing devices.
Generally, such program modules comprise computer code, routines, programs, objects, components, information structures (e.g., data structures and/or metadata structures), etc., that can perform particular tasks (e.g., one or more operations) in response to execution by one or more processors, which can be integrated into the computing device 810 or functionally coupled thereto.

The various example embodiments of the disclosure can be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that can be suitable for implementation of various aspects or features of the disclosure in connection with the replacement and/or augmentation of automated audio menus with interactive digital menus described herein can comprise personal computers; server computers; laptop devices; handheld computing devices, such as mobile tablets or e-readers; wearable computing devices; and multiprocessor systems. Additional examples can include set-top boxes, programmable consumer electronics, network personal computers (PCs), minicomputers, mainframe computers, blade computers, programmable logic controllers, distributed computing environments that comprise any of the above systems or devices, and the like.

As illustrated, the computing device 810 can comprise one or more processors 814, one or more input/output (I/O) interfaces 816, a memory 830, and a bus architecture 832 (also termed bus 832) that functionally couples various functional elements of the computing device 810. In certain embodiments, the computing device 810 can include, optionally, a radio unit 812. The radio unit 812 can include one or more antennas and a communication processing unit that can permit wireless communication between the computing device 810 and another device, such as one of the computing device(s) 870. The bus 832 can include at least one of a system bus, a memory bus, an address bus, or a message bus, and can permit the exchange of information (data, metadata, and/or signaling) between the processor(s) 814, the I/O interface(s) 816, and/or the memory 830, or respective functional elements therein. In certain scenarios, the bus 832 in conjunction with one or more internal programming interfaces 850 (also referred to as interface(s) 850) can permit such exchange of information. In scenarios in which the processor(s) 814 include multiple processors, the computing device 810 can utilize parallel computing.

The I/O interface(s) 816 can permit communication of information between the computing device and an external device, such as another computing device, e.g., a network element or an end-user device. Such communication can include direct communication or indirect communication, such as the exchange of information between the computing device 810 and the external device via a network or elements thereof. As illustrated, the I/O interface(s) 816 can comprise one or more of network adapter(s) 818, peripheral adapter(s) 822, and display unit(s) 826. Such adapter(s) can permit or facilitate connectivity between the external device and one or more of the processor(s) 814 or the memory 830. For example, the peripheral adapter(s) 822 can include a group of ports, which can include at least one of parallel ports, serial ports, Ethernet ports, V.35 ports, or X.21 ports. In certain embodiments, the parallel ports can comprise General Purpose Interface Bus (GPIB), IEEE-1284, while the serial ports can include Recommended Standard (RS)-232, V.11, Universal Serial Bus (USB), FireWire or IEEE-1394.

In one aspect, at least one of the network adapter(s) 818 can functionally couple the computing device 810 to one or more computing devices 870 via one or more traffic and signaling pipes 860 that can permit or facilitate the exchange of traffic 862 and signaling 864 between the computing device 810 and the one or more computing devices 870. Such network coupling provided at least in part by the at least one of the network adapter(s) 818 can be implemented in a wired environment, a wireless environment, or both. The information that is communicated by the at least one of the network adapter(s) 818 can result from the implementation of one or more operations of a method in accordance with aspects of this disclosure. Such output can be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like. In certain scenarios, each of the computing device(s) 870 can have substantially the same architecture as the computing device 810. In addition or in the alternative, the display unit(s) 826 can include functional elements (e.g., lights, such as light-emitting diodes; a display, such as a liquid crystal display (LCD), a plasma monitor, a light-emitting diode (LED) monitor, or an electrochromic monitor; combinations thereof; or the like) that can permit control of the operation of the computing device 810, or can permit conveying or revealing the operational conditions of the computing device 810.

In one aspect, the bus 832 represents one or more of several possible types of bus structures, including a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. As an illustration, such architectures can comprise an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Personal Computer Memory Card International Association (PCMCIA) bus, a Universal Serial Bus (USB), and the like. The bus 832, and all buses described herein can be implemented over a wired or wireless network connection and each of the subsystems, including the processor(s) 814, the memory 830 and memory elements therein, and the I/O interface(s) 816 can be contained within one or more remote computing devices 870 at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system. In certain embodiments, such a distributed system can implement the functionality described herein in a client-host or client-server configuration in which the audio-to-digital menu component(s) 836 or the audio-to-digital menu information 840, or both, can be distributed between the computing device 810 and at least one of the computing device(s) 870, and the computing device 810 and at least one of the computing device(s) 870 can execute such components and/or leverage such information. It should be appreciated that, in an embodiment in which the computing device 810 embodies or constitutes a communication device, the audio-to-digital menu component(s) 836 can be different from those in an embodiment in which the computing device 810 embodies or constitutes a digital menu server.

The computing device 810 can comprise a variety of computer-readable media. Computer-readable media can be any available media (transitory and non-transitory) that can be accessed by a computing device. In one aspect, computer-readable media can comprise computer non-transitory storage media (or computer-readable non-transitory storage media) and communications media. Example computer-readable non-transitory storage media can be any available media that can be accessed by the computing device 810, and can comprise, for example, both volatile and non-volatile media, and removable and/or non-removable media. In one aspect, the memory 830 can comprise computer-readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM).

The memory 830 can comprise functionality instructions storage 834 and functionality information storage 838. The functionality instructions storage 834 can comprise computer-accessible instructions that, in response to execution (by at least one of the processor(s) 814), can implement one or more of the functionalities of the disclosure. The computer-accessible instructions can embody or can comprise one or more software components illustrated as audio-to-digital menu component(s) 836. In one scenario, execution of at least one component of the audio-to-digital menu component(s) 836 can implement one or more of the methods described herein, such as the example method 900. For instance, such execution can cause a processor (e.g., one of the processor(s) 814) that executes the at least one component to carry out a disclosed example method. It should be appreciated that, in one aspect, a processor of the processor(s) 814 that executes at least one of the audio-to-digital menu component(s) 836 can retrieve information from or retain information in one or more memory elements 840 in the functionality information storage 838 in order to operate in accordance with the functionality programmed or otherwise configured by the audio-to-digital menu component(s) 836. The one or more memory elements 840 may be referred to as audio-to-digital menu information 840. Such information can include at least one of code instructions, information structures, or the like. For instance, at least a portion of such information structures can be indicative or otherwise representative of digital menus of options associated with an automated communication platform.

In certain embodiments, one or more of the audio-to-digital menu component(s) 836 can embody or can constitute one or more of the digital-to-audio option unit 260 and/or at least a portion of the communication unit 210, and can provide the functionality of such unit(s) in accordance with aspects of this disclosure. In other embodiments, one or more of the audio-to-digital menu component(s) 836 in combination with at least one of the processor(s) 814 can embody or can constitute one or more of the digital-to-audio option unit 260 or at least a portion of the communication unit 210, and can provide the functionality of such unit(s) in accordance with aspects of this disclosure. In scenarios in which the computing device 810 can embody a digital menu server (e.g., digital menu server 140), one or more of the audio-to-digital menu component(s) 836 can embody or can constitute one or more of the menu collection unit 610, the menu composition unit 620, the response collection unit 710, and/or the machine-learning unit 720, and can provide the functionality of such unit(s) in accordance with aspects of this disclosure.

At least one of the one or more interfaces 850 (e.g., application programming interface(s)) can permit or facilitate communication of information between two or more components within the functionality instructions storage 834. The information that is communicated by the at least one interface can result from implementation of one or more operations in a method of the disclosure. In certain embodiments, one or more of the functionality instructions storage 834 and the functionality information storage 838 can be embodied in or can comprise removable/non-removable, and/or volatile/non-volatile computer storage media.

At least a portion of at least one of the audio-to-digital menu component(s) 836 or the audio-to-digital menu information 840 can program or otherwise configure one or more of the processors 814 to operate at least in accordance with the functionality described herein. One or more of the processor(s) 814 can execute at least one of the audio-to-digital menu component(s) 836 and leverage at least a portion of the information in the functionality information storage 838 in order to provide replacement and/or augmentation of automated audio menus with interactive digital menus in accordance with one or more aspects described herein.

It should be appreciated that, in certain scenarios, the functionality instruction(s) storage 834 can embody or can comprise a computer-readable non-transitory storage medium having computer-accessible instructions that, in response to execution, cause at least one processor (e.g., one or more of the processor(s) 814) to perform a group of operations comprising the operations or blocks described in connection with the disclosed methods.

In addition, the memory 830 can comprise computer-accessible instructions and information (e.g., data, metadata, and/or programming code instructions) that permit or facilitate the operation and/or administration (e.g., upgrades, software installation, any other configuration, or the like) of the computing device 810. Accordingly, as illustrated, the memory 830 can comprise a memory element 842 (labeled operating system (OS) instruction(s) 842) that contains one or more program modules that embody or include one or more operating systems, such as Windows operating system, Unix, Linux, Symbian, Android, Chromium, and substantially any OS suitable for mobile computing devices or tethered computing devices. In one aspect, the operational and/or architectural complexity of the computing device 810 can dictate a suitable OS. The memory 830 also comprises a system information storage 846 having data, metadata, and/or programming code that permits or facilitates the operation and/or administration of the computing device 810. Elements of the OS instruction(s) 842 and the system information storage 846 can be accessible or can be operated on by at least one of the processor(s) 814.

It should be recognized that while the functionality instructions storage 834 and other executable program components, such as the OS instruction(s) 842, are illustrated herein as discrete blocks, such software components can reside at various times in different memory components of the computing device 810, and can be executed by at least one of the processor(s) 814. In certain scenarios, an implementation of the audio-to-digital menu component(s) 836 can be retained on or transmitted across some form of computer-readable media.

The computing device 810 and/or one of the computing device(s) 870 can include a power supply (not shown), which can power up components or functional elements within such devices. The power supply can be a rechargeable power supply, e.g., a rechargeable battery, and it can include one or more transformers to achieve a power level suitable for the operation of the computing device 810 and/or one of the computing device(s) 870, and components, functional elements, and related circuitry therein. In certain scenarios, the power supply can be attached to a conventional power grid to recharge and ensure that such devices can be operational. In one aspect, the power supply can include an I/O interface (e.g., one of the network adapter(s) 818) to connect operationally to the conventional power grid. In another aspect, the power supply can include an energy conversion component, such as a solar panel, to provide additional or alternative power resources or autonomy for the computing device 810 and/or one of the computing device(s) 870.

The computing device 810 can operate in a networked environment by utilizing connections to one or more remote computing devices 870. As an illustration, a remote computing device can be a personal computer, a portable computer, a server, a router, a network computer, a peer device or other common network node, and so on. As described herein, connections (physical and/or logical) between the computing device 810 and a computing device of the one or more remote computing devices 870 can be made via one or more traffic and signaling pipes 860, which can comprise wired link(s) and/or wireless link(s) and several network elements (such as routers or switches, concentrators, servers, and the like) that form a local area network (LAN), a wide area network (WAN), and/or other networks (wireless or wired) having different footprints. Such networking environments are conventional and commonplace in dwellings, offices, enterprise-wide computer networks, intranets, local area networks, and wide area networks.

In one or more embodiments, one or more of the disclosed methods can be practiced in distributed computing environments, such as grid-based environments, where tasks can be performed by remote processing devices (computing device(s) 870) that are functionally coupled (e.g., communicatively linked or otherwise coupled) through a network having traffic and signaling pipes and related network elements. In a distributed computing environment, in one aspect, one or more software components (such as program modules) can be located in both a local computing device and at least one remote computing device.

In view of the aspects described herein, example methods that can be implemented in accordance with the disclosure can be better appreciated with reference, for example, to the flowcharts in FIGS. 9-13. For purposes of simplicity of explanation, the example methods disclosed herein are presented and described as a series of blocks (with each block representing an action or an operation in a method, for example). However, it is to be understood and appreciated that the disclosed methods are not limited by the order of blocks and associated actions or operations, as some blocks may occur in different orders and/or concurrently with other blocks from those that are shown and described herein. For example, the various methods (or processes or techniques) in accordance with this disclosure can be alternatively represented as a series of interrelated states or events, such as in a state diagram. Furthermore, not all illustrated blocks, and associated action(s), may be required to implement a method in accordance with one or more aspects of the disclosure. Further yet, two or more of the disclosed methods or processes can be implemented in combination with each other, to accomplish one or more features or advantages described herein.

It should be appreciated that the methods in accordance with this disclosure can be retained on an article of manufacture, or computer-readable medium, to permit or facilitate transporting and transferring such methods to a computing device (such as a two-way communication device, e.g., a mobile smartphone or a voice-over-IP tethered telephone; a blade computer; a programmable logic controller; and the like) for execution, and thus implementation, by a processor of the computing device or for storage in a memory thereof or functionally coupled thereto. In one aspect, one or more processors, such as processor(s) that implement (e.g., execute) one or more of the disclosed methods, can be employed to execute code instructions retained in a memory, or any computer- or machine-readable medium, to implement one or more of the disclosed methods. The code instructions can provide a computer-executable or machine-executable framework to implement the methods described herein.

FIG. 9 presents a flowchart of an example method 900 for replacing an automated audio menu of an automated communication platform with an interactive digital menu according to at least certain aspects of the disclosure. In certain embodiments, a communication device (e.g., a mobile smartphone, a voice-over-IP tethered telephone, etc.) having one or more processors or being functionally coupled to at least one processor can implement (e.g., compile, execute, compile and execute, or the like) one or more blocks of the example method 900. The one or more processors or the at least one processor can be functionally coupled to one or more memory devices having encoded thereon computer-accessible instructions that, when executed, can permit implementation of one or more blocks of the subject example method. In one scenario, for example, the example communication device 200 can implement the subject example method.

At block 910, a call with the automated communication platform can be initiated by the communication device. In one example, initiating the call can include receiving a request to connect with the automated communication platform, where the request can be received at the communication device. The request to connect can be associated with numerous types of call sessions (either voice calls or data calls) that may be established or otherwise performed, at least in part, via the communication device. The types of call sessions can include, for example, calls implemented using a cellular radio telecommunication protocol (such as voice calls over cellular telecommunication networks); calls implemented using VoIP protocols; and videotelephony calls implemented using suitable protocols for call initiation and media stream communication (e.g., transmission and/or reception). The automated communication platform (e.g., an IVR system, an automated attendant, or the like) can utilize an automated audio menu of options for interaction with the automated communication platform. At block 920, a communication address (e.g., a telephone number, a subscriber number, an IMSI, an ESN, an IP address, a SIP address, a MAC address, or the like) of the automated communication platform can be communicated or otherwise sent (e.g., transmitted) to a remote computing device (e.g., a digital menu server, as described herein). The communication address can be included or otherwise conveyed in the request to connect, and can be transmitted by the communication device. In one aspect, the remote computing device can determine that a digital menu of options for interaction with the automated communication platform is available. To at least such an end, in one example, the remote computing device can query a database or other data storage for the digital menu of options, where a search query can include the communication address. 
In response to the query, the remote computing device can receive the digital menu of options, thus establishing that such a menu is available. As such, in one aspect, at block 930, the digital menu of options can be received at the communication device. In one aspect, the digital menu of options is received as option information indicative of each of the options included in the digital menu of options.
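For illustration, the lookup performed by the remote computing device can be sketched as a query keyed by the communication address. The menu store, address, and option fields below are hypothetical placeholders; an actual digital menu server would query a database or other data storage as described above.

```python
# Sketch of a digital-menu lookup keyed by communication address.
# MENU_STORE stands in for the database queried by the remote computing
# device; its contents are illustrative only.
from typing import Optional

MENU_STORE = {
    "+15551230000": {
        "platform": "Example IVR",
        "options": [
            {"code": "1", "label": "Check account balance"},
            {"code": "2", "label": "Make a payment"},
            {"code": "0", "label": "Speak to a representative"},
        ],
    }
}

def lookup_digital_menu(communication_address: str) -> Optional[dict]:
    """Return the digital menu of options for the address, or None when
    no menu is available for the automated communication platform."""
    return MENU_STORE.get(communication_address)
```

A `None` result corresponds to the "menu unavailable" indication discussed in connection with the example method 1000 below.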

At block 940, the communication device that initiated the call at block 910 can determine if the received digital menu of options is valid. In certain embodiments, the communication device can compare an audio input signal from the automated communication platform with a reference audio signal (e.g., a segment of a few seconds of digitized audio) that is expected to be received from such a platform. More specifically, the remote computing device that can communicate the digital menu also can provide the reference audio signal, which may correspond to the audio signal that is expected to be received from the automated communication platform after a certain time interval has elapsed since the call was initiated. In addition, the communication device can compare a sequence of frequencies of the audio input signal with a sequence of frequencies of the reference audio signal. A comparison that establishes that both such signals have the same or substantially the same sequences can indicate the validity of the digital menu. In other embodiments, the communication device can include a speech-to-text unit that can convert to text a portion of the audio input signal from the automated communication platform. The text so generated can be compared with at least a portion of the digital menu in order to establish that the digital menu is consistent with an automated audio menu of the automated communication platform. A determination that the digital menu is invalid can result in implementing exception handling at block 950. For instance, the communication device can display or otherwise present a message indicating that the received digital menu is invalid. For instance, as described herein, the message “Digital Menu is Outdated” or the message “Audio-to-Digital Menu Service is Currently Unavailable” may be displayed at the communication device.
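The speech-to-text variant of the validation at block 940 can be sketched as a comparison between a transcript of the platform's opening audio and the option labels in the received digital menu. The word-overlap measure and the threshold below are assumptions for illustration; an implementation could instead compare frequency sequences against the reference audio signal, as described above.

```python
# Illustrative validity check: the digital menu is treated as valid when
# a sufficient fraction of the words appearing in its option labels also
# appear in the transcript of the platform's audio. The 0.6 threshold is
# an assumed tuning parameter, not a value from the disclosure.
def menu_is_valid(transcript: str, option_labels: list[str],
                  threshold: float = 0.6) -> bool:
    transcript_words = set(transcript.lower().split())
    label_words = set()
    for label in option_labels:
        label_words.update(label.lower().split())
    if not label_words:
        return False
    overlap = len(label_words & transcript_words) / len(label_words)
    return overlap >= threshold
```

A failed check would direct the flow to the exception handling of block 950 (e.g., displaying the “Digital Menu is Outdated” message).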

A determination at block 940 that the digital menu is valid can direct the flow of the subject example method to block 960, at which multiple actionable indicia, such as clickable buttons or other selectable display options, can be displayed at the communication device. At least one of the multiple actionable indicia can correspond to a respective option in the automated audio menu. In addition or in the alternative, one or more of the multiple actionable indicia can represent shortcuts for specific responses to the automated communication platform and/or options for responses customized to the communication device and the automated communication platform. In addition or in the alternative, in certain embodiments of the subject example method, certain non-actionable indicia can be displayed at the communication device. For instance, indicia representative of a timer can indicate a time interval remaining before the call is transferred to a live attendant. As described herein, one or more of the displayed indicia (actionable or otherwise) can be highlighted or otherwise visually emphasized in order to present such indicia distinctively to convey a popular or otherwise special option(s), customized option(s), or live attendant option(s).

In certain embodiments, presenting or otherwise displaying certain visually emphasized options can include parsing the option information indicative or otherwise representative of the digital menu options for a keyword or a phrase indicative of a live attendant option to reach at least one of a representative or an operator. After one or more of such keywords or phrases are identified or otherwise selected, displaying the multiple actionable indicia can include visually emphasizing at least one of the multiple actionable indicia to represent the live attendant option.
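The keyword scan described above can be sketched as follows. The keyword list is an assumption for illustration and is not an exhaustive vocabulary for live-attendant phrasing.

```python
# Parse option labels for keywords suggesting a live attendant option,
# returning the indices of options whose indicia should be visually
# emphasized. The keyword list is illustrative only.
LIVE_ATTENDANT_KEYWORDS = ("representative", "operator", "agent", "attendant")

def find_live_attendant_options(option_labels: list[str]) -> list[int]:
    """Return indices of options whose label suggests a live attendant."""
    matches = []
    for index, label in enumerate(option_labels):
        text = label.lower()
        if any(keyword in text for keyword in LIVE_ATTENDANT_KEYWORDS):
            matches.append(index)
    return matches
```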

In addition, in certain embodiments, displaying the multiple actionable indicia can include displaying second actionable indicia representative of concatenated options in the digital menu, where the second actionable indicia represent a tree structure of options having a first node associated with a first option and a second node associated with a second option, the second actionable indicia comprising one or more fillable fields to input information associated with the first option. In addition or in the alternative, displaying the multiple actionable indicia can include displaying second actionable indicia representative of a customized option for the automated communication platform, where the customized option is representative of historical selections.

At block 970, a selection of one of the multiple actionable indicia can be received at the communication device. At block 980, the communication device can generate code information according to the received selection. The code information can be embodied in or can include one or more tones (e.g., one or more DTMF tones), numeric data, alphanumeric data, or the like. At least a portion of the code information can be encoded or otherwise formatted, for example, according to a protocol for digital signaling suitable for communication with the automated communication platform. At block 990, the communication device can communicate (e.g., send or transmit) the generated code information (e.g., a generated sequence of tones) to the automated communication platform.
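Blocks 980-990 can be sketched as mapping a selected option to the code information to be sent. The option structure below (a `dtmf` field holding the digit string) is an assumption for illustration.

```python
# Generate code information (here, a DTMF digit string) for a selected
# actionable indicium. Valid DTMF characters are the digits 0-9 and the
# '*' and '#' symbols; anything else is rejected.
def generate_code_information(option: dict) -> str:
    """Return the DTMF sequence for a selected option, e.g. "*3"."""
    sequence = option["dtmf"]
    allowed = set("0123456789*#")
    if not set(sequence) <= allowed:
        raise ValueError(f"invalid DTMF sequence: {sequence!r}")
    return sequence
```

The returned sequence would then be synthesized as tones (or formatted per a digital signaling protocol) and transmitted to the automated communication platform.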

While not illustrated in FIG. 9, in certain embodiments, the subject example method can include receiving, from the automated communication platform, an audio input signal representative of at least one option of the automated audio menu of options for interaction with the automated communication platform, and causing a speaker of the communication device to generate audio corresponding to at least a portion of the audio input signal. As such, an aural representation of the automated audio menu can be presented to an end-user of the communication device. In certain implementations, the audio input signal can permit determining accuracy of the digital menu in accordance with aspects described herein in connection with block 940.

FIG. 10 presents a flowchart of an example method 1000 for producing a digital menu corresponding to an automated audio menu for an automated communication platform in accordance with one or more aspects of the disclosure. In one embodiment, a communication device as described herein (e.g., communication device 110) can implement the subject example method in its entirety or in part. In certain embodiments, such a communication device can include one or more processors functionally coupled to one or more memory devices, where the memory device(s) can have encoded thereon computer-accessible instructions that, when executed, can permit implementation of one or more blocks of the subject example method. At block 1010, a call with the automated communication platform can be initiated by the communication device. As described herein, in one example, initiating the call can include receiving a request to connect with the automated communication platform, where the request can be received at the communication device. The request to connect can be associated with numerous types of call sessions (either voice calls or data calls) that may be established or otherwise performed, at least in part, via the communication device. The types of call sessions can include, for example, calls implemented using a cellular radio telecommunication protocol; calls implemented using VoIP protocols; and videotelephony calls implemented using suitable protocols for call initiation and media stream communication. In addition, the automated communication platform (e.g., an IVR system, an automated attendant, or the like) can utilize an automated audio menu of options for interaction with the automated communication platform. 
At block 1020, a communication address (e.g., a telephone number, a subscriber number, an IMSI, an ESN, an IP address, a SIP address, a MAC address, or the like) of the automated communication platform can be communicated to a remote computing device (e.g., a digital menu server, as described herein). In one aspect, the remote computing device can determine that a digital menu of options for interaction with the automated communication platform is unavailable. As such, in one example, the remote computing device can query a database or other data storage for the digital menu of options, where a search query can include the communication address. In response to the query, the remote computing device can receive signaling or other information indicative or otherwise representative of the digital menu of options being unavailable. Therefore, at block 1030, an indication that the digital menu is unavailable can be received at the communication device.

In response to the digital menu being unavailable, at block 1040, an audio input signal representative of one or more audible options of the automated audio menu of options can be received at the communication device. At block 1050, a textual representation of at least a portion of the received audio input signal can be determined. In one embodiment, the communication device can recognize speech conveyed in at least the portion of the audio input signal, and can convert the recognized speech to text. For instance, a speech-to-text unit integrated into or functionally coupled to the communication device can apply one or more models of speech (e.g., hidden Markov model(s)) to at least the portion of the audio input signal in order to recognize the speech conveyed therein, where the recognized speech can represent at least a portion of the one or more audible options. The speech-to-text unit can determine the textual representation using the recognized speech. At block 1060, one or more digital options can be generated using the textual representation of the one or more audible options. Each of the one or more digital options can correspond to the one or more audible options. At block 1070, a digital menu associated with the automated audio menu can be generated using at least one of the one or more digital options.
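One way to realize blocks 1050-1070 is to convert the transcript of each audible option into a digital option and assemble the results into a digital menu. The "press N for/to ..." phrasing assumed by the pattern below is an illustrative convention for typical IVR prompts, not a requirement of the disclosure.

```python
# Build digital options from speech-to-text transcripts of audible
# options. Lines that do not match the assumed "press N for/to ..."
# phrasing are ignored.
import re

OPTION_PATTERN = re.compile(r"press\s+(\d)\s+(?:for|to)\s+(.+)", re.IGNORECASE)

def build_digital_menu(transcripts: list[str]) -> list[dict]:
    menu = []
    for line in transcripts:
        match = OPTION_PATTERN.search(line)
        if match:
            menu.append({"code": match.group(1),
                         "label": match.group(2).strip().rstrip(".")})
    return menu
```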

At block 1080, it can be determined if an audible option of the one or more audible options is selected. For instance, the communication device that received the audio input signal at block 1040 can determine that input information (e.g., signaling) indicative of interaction with a user interface of the communication device is received. When the input information does not terminate the call between the communication device and the automated communication platform, the input information can represent the selection of an audible option. In response to ascertaining that the audible option is not selected, the subject example method can end. In the alternative, in response to ascertaining that the audible option is selected, the flow of the example method 1000 can be directed to block 1040. Such redirection of the flow can permit receiving additional audible options associated with the automated audio menu, and thus, a tree or other tier(s) of audible option(s) can be received and incorporated into the digital menu via the implementation of blocks 1040-1070.

While not shown, in certain embodiments, the digital menu generated by implementing the example method 1000 can be communicated or otherwise provided to the remote computing device (e.g., a digital menu server) that provided the indication received at block 1030. The remote computing device can retain the digital menu in a repository (e.g., storage 150) or other storage, and can associate the digital menu with the communication address of the automated communication platform. Therefore, the digital menu may be available in other instances when the communication device initiates a call with the automated communication platform.

It should be appreciated that in certain embodiments, one or more of blocks 1040 through 1070 can be implemented by a computing device remote to the communication device that initiates the call with the automated communication platform. In one example, the computing device can be embodied in or can include the remote computing device (e.g., the digital menu server 140) that provided the indication received at block 1030. In another example, the computing device can include a speech-to-text server that can be functionally coupled to the remote computing device (e.g., the digital menu server 140) that provided the indication received at block 1030. In any instance, in such embodiments, the communication device (e.g., communication device 110) can communicate audio signal received from the automated communication platform to the computing device for implementation of one or more of the blocks 1040-1070. In addition, in response to receiving an audible selection at block 1080, the communication device can communicate information indicative or otherwise representative of such a selection to the computing device, which can continue implementing one or more of the blocks 1040-1070 in order to develop the digital menu as described herein.

As an illustration of generation of digital menus by a computing device remote to a communication device, FIG. 11 presents a flowchart of an example method 1100 for producing a digital menu corresponding to an automated audio menu of options for an automated communication platform in accordance with one or more aspects of the disclosure. A computing device (such as a digital menu server in accordance with this disclosure) can implement the subject example method in its entirety or in part. The computing device can include one or more processors or can be functionally coupled to at least one processor, where at least one of the one or more processors can be functionally coupled to one or more memory devices, where the memory device(s) can have encoded thereon computer-accessible instructions that, when executed, can permit implementation of one or more blocks of the subject example method. At block 1110, a call with the automated communication platform can be established via the computing device. The call session can be established at predetermined times (e.g., according to a schedule or periodically) and/or in response to certain events. In one embodiment, the computing device can include an autodialer device (which also may be referred to as autodialer) and/or other communication unit that can permit automatically or semi-automatically calling the automated communication platform. In certain implementations, the computing device can utilize or otherwise leverage SIP or other packet-switched communication protocols in order to establish such a call session.

At block 1120, an audio input signal representative of one or more audible options of the automated audio menu of options for the called automated communication platform can be received at the computing device. At block 1130, a textual representation of at least a portion of the received audio input signal can be determined. In one embodiment, the computing device (e.g., the digital menu server 140) can recognize speech conveyed in at least the portion of the audio input signal, and can convert the recognized speech to text. For instance, a speech-to-text unit integrated into or functionally coupled to the computing device can apply one or more models of speech (e.g., hidden Markov model(s)) to at least the portion of the audio input signal in order to recognize the speech conveyed therein, where the recognized speech can represent at least a portion of the one or more audible options. The speech-to-text unit can determine the textual representation using the recognized speech. At block 1140, one or more digital non-audio options can be generated using the textual representation of the one or more audible options. Each of the one or more digital non-audio options can correspond to the one or more audible options. At block 1150, a digital menu associated with the automated audio menu can be generated using at least one of the one or more digital non-audio options.

At block 1160, it can be determined if the computing device that implements the subject example method is to select an audible option of the one or more audible options in block 1120. A determination that the audible option is not to be selected can result in the termination of the subject example method. In the alternative, a determination that the audible option is to be selected can lead to block 1170, at which the computing device can generate code information according to the selected audible option. The code information can be embodied in or can include a sequence of tones (e.g., one or more DTMF tones), numeric data, alphanumeric data, or the like. At least a portion of the code information can be encoded or otherwise formatted, for example, according to a protocol for digital signaling suitable for communication with the automated communication platform. In one example, the computing device can parse a digital option associated with the selected audible option and can determine a combination of digit(s) (e.g., 0, 1, 2, 3, 4, 5, 6, 7, 8, and/or 9), “#” symbol, and “*” symbol for the selected audible option. An autodialer device that may be integrated into the computing device can generate the sequence of tones for such a combination (e.g., “*3”). At block 1180, the computing device can communicate the generated sequence of tones or the other type of code information to the automated communication platform, and flow can be directed to block 1120. It should be appreciated that, in one aspect, selection of the audible option and implementation of at least blocks 1170 and 1180 can permit traversing, at least in part, the automated audio menu by causing the automated communication platform to communicate additional audio signal representative or otherwise indicative of other audible options in the automated audio menu. 
Therefore, a tree or other tier(s) of audible option(s) can be received and incorporated into the digital menu via the implementation of blocks 1120-1150.
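The autodialer step at block 1170 can be sketched as a lookup of the standard DTMF frequency pair (a low-group and a high-group frequency, in Hz) for each character of a sequence such as "*3". An actual implementation would additionally synthesize and play these tone pairs toward the automated communication platform.

```python
# Standard DTMF keypad frequency pairs (low Hz, high Hz) per character.
DTMF_FREQS = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def tone_sequence(code: str) -> list[tuple[int, int]]:
    """Return the (low, high) frequency pair for each DTMF character."""
    return [DTMF_FREQS[ch] for ch in code]
```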

FIG. 12 presents a flowchart of an example method 1200 for producing a digital menu of options corresponding to an automated audio menu of options for an automated communication platform in accordance with one or more aspects of the disclosure. Similar to the example method 1100, a computing device (such as a digital menu server in accordance with this disclosure) can implement the subject example method in its entirety or in part. As can be appreciated, the example method 1200 does not leverage communication with the automated communication platform. Instead, in one aspect, information indicative or representative of at least a portion of the automated audio menu can be utilized to compose the digital menu. More specifically, at block 1210, a communication address (e.g., a telephone number, a subscriber number, an IMSI, an ESN, an IP address, a SIP address, a MAC address, or the like) of the automated communication platform can be received at the computing device. At block 1220, a repository can be queried for content containing the communication address. The repository can be embodied in or can include any centralized or distributed platform (e.g., shared memory devices in a server cluster) that can retain information. In one example, the repository can be embodied in or can include a distributed database (relational or otherwise). In another example, the repository can be embodied in or can include memory devices forming a network, such as the Internet or another WAN, a LAN, a MAN, a HAN, and/or a PAN. As such, files, documents, databases, hyperlinks, combinations thereof, or the like, can be queried for content containing the communication address.

At block 1230, one or more audio menu options in the automated audio menu of the automated communication platform can be determined using the content that can be accessed in response to implementation of block 1220. In one implementation, the audio menu option(s) can be determined by searching for strings of characters, words, or phrases that typically are included in automated audio menus—e.g., “1 for,” “2 for,” “3 for,” “4 for,” “5 for,” “6 for,” “7 for,” “8 for,” “9 for,” “0 for,” “for payment,” “representative,” or the like. At block 1240, one or more digital menu options corresponding to the one or more audio menu options can be generated. At block 1250, a digital menu of options associated with the automated audio menu can be generated.
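The string search at block 1230 can be sketched as a scan of retrieved content (e.g., a web page that documents the phone tree) for "digit for ..." phrases. The pattern and the sample content are illustrative only; a deployment might search many documents and a richer phrase vocabulary.

```python
# Extract candidate audio menu options from free-form content by
# matching "<digit> for <label>" phrases, where a label runs until the
# next punctuation mark or end of string.
import re

MENU_PHRASE = re.compile(r"\b(\d)\s+for\s+([a-z ]+?)(?=[,.;]|$)", re.IGNORECASE)

def extract_menu_options(content: str) -> dict[str, str]:
    return {digit: label.strip()
            for digit, label in MENU_PHRASE.findall(content)}
```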

FIG. 13 presents a flowchart of another example method 1300 for producing visual options corresponding to an automated audio menu of options for an automated communication platform in accordance with one or more aspects of the disclosure. In certain embodiments, the subject example method can be implemented, at least in part, by a communication device (e.g., a mobile telephone or any other two-way communication device, such as communication device 110, communication device 200, computing device 810, or the like) that can establish a call or a connection with the automated communication platform in accordance with aspects of the disclosure. In certain embodiments, such a communication device can include one or more processors functionally coupled to one or more memory devices, where the memory device(s) can have encoded thereon computer-accessible instructions that, when executed, can permit implementation of one or more blocks of the subject example method.

At block 1310, a call with an automated communication platform can be initiated by the communication device. As described herein, in one example, initiating the call can include receiving a request to connect with the automated communication platform, where the request can be received at the communication device. The request to connect can be associated with numerous types of call sessions (either voice calls or data calls) that may be established or otherwise performed, at least in part, via the communication device. The types of call sessions can include, for example, calls implemented using a cellular radio telecommunication protocol; calls implemented using VoIP protocols; and videotelephony calls implemented using suitable protocols for call initiation and media stream communication. At block 1320, it can be determined if the automated communication platform has been previously called. In one example, the communication device can search for a telephone number of the automated communication platform in records of previous calls (e.g., a history or log of recent calls) performed by the communication device. Such a search can be performed locally at the communication device, or the communication device can query historical records of calls at a remote computing device. A negative determination at block 1320 can lead to block 1330, at which exception handling is implemented. Implementing exception handling can include, for example, proceeding with the call without utilizing or leveraging historical information as described in greater detail hereafter.
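The local search at block 1320 can be sketched as a scan of the device's call log for the platform's telephone number. The log structure below is hypothetical; a real device would consult its call-history records or query a remote computing device as described above.

```python
# Determine whether the automated communication platform was previously
# called by scanning an illustrative call log of dict entries.
def previously_called(call_log: list[dict], telephone_number: str) -> bool:
    return any(entry.get("number") == telephone_number for entry in call_log)
```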

A positive determination at block 1320 can lead to block 1340, at which one or more historical options selected for interaction with the automated communication platform can be determined or otherwise identified. Such historical option(s) can include an option that may have been selected from an automated audio menu or a digital menu associated with the automated communication platform. In certain embodiments, the communication device that initiates the call with the automated communication platform can query a remote computing device and/or remote information storage for the one or more historical options associated with previous selections in previous interactions (calls or communications) with the automated communication platform. Such historical options may be referred to as “historical selections” or, more simply, as “historical options.” In response to the query, the remote computing device and/or the information storage can provide (e.g., transmit) information indicative of the one or more historical options. As such, in one aspect, the communication device can determine or otherwise identify previously selected options for interaction with the automated communication platform. At block 1350, the communication device can present (e.g., display) an option to select at least one of the one or more historical options. For example, the communication device can prompt an end-user of the communication device to select the one or more historical options. Prompting the end-user can include displaying indicia indicative of a prior selection and/or indicia indicative of a request for confirmation of such a selection as a newly selected option to interact with the automated communication platform. 
More specifically, in one example, the communication device can display a banner or other graphical object conveying the following: “The last time you called this number you selected ‘*9’ to reach a live attendant. Would you like to make that your current selection and have it entered again automatically when appropriate?” It should be appreciated that other information can be conveyed when prompting the end-user to confirm a previously selected option.

At block 1360, it can be determined if the one or more historical options are selected or otherwise confirmed. A negative determination—e.g., the previous selection is not confirmed—can lead to block 1370, in which exception handling is implemented. An affirmative determination—e.g., the previous selection is confirmed—can lead to block 1380, at which code information is generated according to the confirmed selection. In one example, the code information can be embodied in or can include a sequence of one or more tones (e.g., one or more DTMF tones); a sequence of one or more numeric codes; or a sequence of one or more alphanumeric codes, where at least one of such sequences can be representative of the confirmed historical selection(s). At block 1390, the generated code information (e.g., sequence of tone(s), sequence of numeric code(s), or sequence of alphanumeric code(s)) can be communicated (e.g., transmitted) to the automated communication platform.

It should be appreciated that, in certain implementations, the subject example method can be carried out in combination with one or more other methods described herein, such as example method 900. It also should be appreciated that, in certain implementations, the example method 1300 can include other blocks related to receiving and/or validating a digital menu in response to initiating or establishing a connection with an automated communication platform. For instance, blocks 920-960 may be included in the example method 1300.

Various embodiments of the disclosure may take the form of an entirely or partially hardware embodiment, an entirely or partially software embodiment, or a combination of software and hardware (e.g., a firmware embodiment). Furthermore, as described herein, various embodiments of the disclosure (e.g., methods and systems) may take the form of a computer program product comprising a computer-readable non-transitory storage medium having computer-accessible instructions (e.g., computer-readable and/or computer-executable instructions) such as computer software, encoded or otherwise embodied in such storage medium. Those instructions can be read or otherwise accessed and executed by one or more processors to perform or permit the performance of the operations described herein. The instructions can be provided in any suitable form, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, assembler code, combinations of the foregoing, and the like. Any suitable computer-readable non-transitory storage medium may be utilized to form the computer program product. For instance, the computer-readable medium may include any tangible non-transitory medium for storing information in a form readable or otherwise accessible by one or more computers or processor(s) functionally coupled thereto. Non-transitory storage media can include read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory, etc.

Embodiments of the operational environments and methods (or techniques) are described herein with reference to block diagrams and flowchart illustrations of methods, systems, apparatuses and computer program products. It can be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer-accessible instructions. In certain implementations, the computer-accessible instructions may be loaded or otherwise incorporated onto a general-purpose computer, special-purpose computer, or other programmable information processing apparatus to produce a particular machine, such that the operations or functions specified in the flowchart block or blocks can be implemented in response to execution at the computer or processing apparatus.

Unless otherwise expressly stated, it is in no way intended that any protocol, procedure, process, or method set forth herein be construed as requiring that its acts or steps be performed in a specific order. Accordingly, where a process or a method claim does not actually recite an order to be followed by its acts or steps or it is not otherwise specifically recited in the claims or descriptions of the subject disclosure that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to the arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; the number or type of embodiments described in the specification or annexed drawings, or the like.

As used in this application, the terms “component,” “environment,” “system,” “architecture,” “interface,” “unit,” “module,” “pipe,” and the like are intended to refer to a computer-related entity or an entity related to an operational apparatus with one or more specific functionalities. Such entities may be either hardware, a combination of hardware and software, software, or software in execution. As an example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable portion of software, a thread of execution, a program, and/or a computing device. For example, both a software application executing on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution. A component may be localized on one computing device or distributed between two or more computing devices. As described herein, a component can execute from various computer-readable non-transitory media having various data structures stored thereon. Components can communicate via local and/or remote processes in accordance, for example, with a signal (either analog or digital) having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as a wide area network with other systems via the signal). As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry that is controlled by a software application or firmware application executed by a processor, wherein the processor can be internal or external to the apparatus and can execute at least a part of the software or firmware application.
As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts, and the electronic components can include a processor therein to execute software or firmware that provides, at least in part, the functionality of the electronic components. In certain embodiments, components can communicate via local and/or remote processes in accordance, for example, with a signal (either analog or digital) having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as a wide area network with other systems via the signal). In other embodiments, components can communicate or otherwise be coupled via thermal, mechanical, electrical, and/or electromechanical coupling mechanisms (such as conduits, connectors, combinations thereof, or the like). An interface can include input/output (I/O) components as well as associated processors, applications, and/or other programming components. The terms “component,” “environment,” “system,” “architecture,” “interface,” “unit,” “module,” and “pipe” can be utilized interchangeably and can be referred to collectively as functional elements.

As utilized in this disclosure, the term “processor” can refer to any computing processing unit or device comprising single-core processors; single processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Additionally, a processor can refer to an integrated circuit (IC), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be implemented as a combination of computing processing units. In certain embodiments, processors can utilize nanoscale architectures such as, but not limited to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance the performance of user equipment or other electronic equipment.

In addition, in the present specification and annexed drawings, terms such as “store,” “storage,” “data store,” “data storage,” “memory,” “repository,” and substantially any other information storage component relevant to the operation and functionality of a component of the disclosure, refer to “memory components,” entities embodied in a “memory,” or components forming the memory. It can be appreciated that the memory components or memories described herein embody or comprise non-transitory computer storage media that can be readable or otherwise accessible by a computing device. Such media can be implemented in any methods or technology for storage of information such as computer-readable instructions, information structures, program modules, or other information objects. The memory components or memories can be either volatile memory or non-volatile memory, or can include both volatile and non-volatile memory. In addition, the memory components or memories can be removable or non-removable, and/or internal or external to a computing device or component. Examples of various types of non-transitory storage media can include hard-disc drives, zip drives, CD-ROMs, digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, flash memory cards or other types of memory cards, cartridges, or any other non-transitory medium suitable to retain the desired information and which can be accessed by a computing device.

As an illustration, non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The disclosed memory components or memories of the operational or computational environments described herein are intended to include one or more of these and/or any other suitable types of memory.

Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain implementations could include, while other implementations do not include, certain features, elements, and/or operations. Thus, such conditional language generally is not intended to imply that features, elements, and/or operations are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or operations are included or are to be performed in any particular implementation.

What has been described herein in the present specification and annexed drawings includes examples of systems, devices, and techniques that can provide replacement and/or augmentation of automated audio menus with interactive digital menus. It is, of course, not possible to describe every conceivable combination of elements and/or methods for purposes of describing the various features of the disclosure, but it can be recognized that many further combinations and permutations of the disclosed features are possible. Accordingly, it may be apparent that various modifications can be made to the disclosure without departing from the scope or spirit thereof. In addition or in the alternative, other embodiments of the disclosure may be apparent from consideration of the specification and annexed drawings, and practice of the disclosure as presented herein. It is intended that the examples put forward in the specification and annexed drawings be considered, in all respects, as illustrative and not restrictive. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

1. A method, comprising:

calling, by a mobile telephone, an interactive voice response (IVR) system having a telephone number;
communicating the telephone number to a remote server;
receiving, from the remote server during the call to the IVR system, information used to display a digital menu of options for interaction with the IVR system, wherein each of the options is associated with one or more tones corresponding to a numeric code comprising one or more numerals from 0 to 9;
receiving, from the IVR system prior to displaying the digital menu, an audio input signal representative of a portion of an automated audio menu for interaction with the IVR system;
causing a speaker of the mobile telephone to generate audio corresponding to a first portion of the audio input signal;
determining that the digital menu is a valid representation of the automated audio menu by comparing at least a portion of the audio input signal to a reference audio signal;
displaying the digital menu at a display device of the mobile telephone, wherein the displayed digital menu comprises selectable display options, each of the selectable display options corresponding to an audible option in the automated audio menu, and wherein each of the selectable display options comprises a textual representation of the audible option;
receiving a selection from the selectable display options via a touch screen of the mobile telephone prior to a speaker of the mobile telephone generating audio corresponding to a second portion of the audio input signal;
generating numeric code information according to the received selection; and
communicating the numeric code information to the IVR system.

2. The method of claim 1, further comprising parsing the information for a keyword or a phrase associated with a live attendant option to reach at least one of a representative or an operator, and

determining second numeric code information associated with the live attendant option,
wherein displaying the digital menu further comprises highlighting at least one of the selectable display options to represent the live attendant option.

3. The method of claim 1, wherein displaying the digital menu comprises displaying a selectable display option representative of a customized option for the IVR system and a caller at the mobile telephone, and wherein the customized option is representative of historical selections for the caller.

4. The method of claim 1, further comprising displaying indicia representative of a timer indicative of time remaining to transfer the call to a live attendant.

5. The method of claim 1, further comprising:

identifying one or more historical options selected in a previous connection with the IVR system; and
generating second code information according to the one or more historical options.

6. A method, comprising:

receiving a request to connect with an automated communication platform having an automated audio menu of options for interaction with the automated communication platform;
sending a communication address of the automated communication platform;
receiving option information indicative of a digital menu of options;
receiving an audio input signal representative of at least one option of the automated audio menu;
causing a speaker of a communication device to generate audio corresponding to at least a portion of the audio input signal, wherein the audio input signal permits determining accuracy of the digital menu;
displaying a plurality of actionable indicia representative of the digital menu, a first actionable indicia of the plurality of actionable indicia corresponding to a first option in the automated audio menu;
receiving a selection of the first actionable indicia;
generating code information according to the selection; and
sending the generated code information to the automated communication platform.

7. The method of claim 6, further comprising parsing the option information for a keyword or a phrase indicative of a live attendant option to reach at least one of a representative or an operator, wherein displaying the plurality of actionable indicia comprises visually emphasizing at least one of the plurality of actionable indicia to represent the live attendant option.

8. The method of claim 6, further comprising displaying indicia representative of a timer indicative of time remaining to transfer the call to a live attendant.

9. The method of claim 6, wherein displaying the plurality of actionable indicia comprises displaying second actionable indicia representative of concatenated options in the digital menu, and wherein the second actionable indicia represents a tree structure of options having a first node associated with a first option and a second node associated with a second option, the second actionable indicia comprising one or more fillable fields to input information associated with the first option.

10. The method of claim 6, wherein displaying the plurality of actionable indicia comprises displaying a second actionable indicia representative of a customized option for the automated communication platform, and wherein the customized option is representative of historical selections.

11. The method of claim 6, wherein determining accuracy of the digital menu comprises receiving a reference audio signal corresponding to an expected audio signal from the automated communication platform,

comparing a sequence of frequencies of the audio input signal with a sequence of frequencies of the reference audio signal, and
determining the accuracy of the digital menu using an outcome of the comparing.
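As an illustrative, non-limiting sketch of the comparison recited in claim 11, the two frequency sequences can be aligned element-wise within a tolerance. The function name and tolerance value below are hypothetical, and extraction of the dominant frequencies (e.g., via an FFT over successive audio frames) is assumed to occur upstream.

```python
def menu_is_accurate(observed_hz: list[float],
                     reference_hz: list[float],
                     tolerance_hz: float = 5.0) -> bool:
    """Deem the digital menu accurate when the sequence of frequencies
    of the audio input signal matches that of the reference audio
    signal, element-wise, within a small tolerance."""
    if len(observed_hz) != len(reference_hz):
        return False  # differing lengths: the menus cannot correspond
    return all(abs(o - r) <= tolerance_hz
               for o, r in zip(observed_hz, reference_hz))


print(menu_is_accurate([440.2, 523.1], [440.0, 523.3]))  # True
print(menu_is_accurate([440.2, 880.0], [440.0, 523.3]))  # False
```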

12. The method of claim 6, further comprising identifying one or more historical options selected in a previous connection with the automated communication platform, and generating second code information according to the one or more historical options.

13. The method of claim 6, further comprising displaying a form having one or more fields to input information associated with the received selection.

14. A device, comprising:

at least one memory device having instructions encoded thereon;
at least one processor functionally coupled to the at least one memory device and configured, by the instructions, to at least:
receive a request to connect with an automated communication platform having an automated audio menu of options for interaction with the automated communication platform;
send a communication address of the automated communication platform;
receive option information indicative of a digital menu of options for interaction with the automated communication platform;
receive an audio input signal representative of at least one option of the automated audio menu;
cause a speaker of the device to generate audio corresponding to at least a portion of the audio input signal;
determine, using at least a portion of the audio input signal, that the digital menu is a valid representation of the automated audio menu;
display a plurality of actionable indicia representative of the digital menu, a first actionable indicia of the plurality of actionable indicia corresponding to a first audible option in the automated audio menu;
receive a selection of one of the plurality of actionable indicia;
generate code information according to the received selection; and
send the code information to the automated communication platform.

15. The device of claim 14, wherein the at least one processor is further configured to cause the device to parse the option information for a keyword indicative of a live attendant option to reach at least one of a representative or an operator, and

to display distinctively at least one of the plurality of actionable indicia to represent the live attendant option.

16. The device of claim 14, wherein the at least one processor is further configured to display indicia representative of a timer indicative of time remaining to transfer the call to a live attendant.

17. The device of claim 14, wherein the at least one processor is further configured to display a form having one or more fields to input information associated with the received selection.

18. The device of claim 14, wherein the at least one processor is further configured to display an actionable indicia representative of a customized option for the automated communication platform.

19. The device of claim 14, wherein the automated communication platform comprises an interactive voice response system or an automated attendant.

20. The device of claim 14, wherein the at least one processor is further configured to:

identify one or more historical options selected in a previous connection with the automated communication platform; and
generate second code information according to the one or more historical options.
Referenced Cited
U.S. Patent Documents
20130069858 March 21, 2013 O'Sullivan
20130078970 March 28, 2013 Rotsztein et al.
20150023484 January 22, 2015 Ni et al.
20150030140 January 29, 2015 Abel
20150030143 January 29, 2015 Bhogal et al.
Patent History
Patent number: 9270811
Type: Grant
Filed: Sep 23, 2014
Date of Patent: Feb 23, 2016
Assignee: Amazon Technologies, Inc. (Seattle, WA)
Inventor: Samuel Richard Atlas (Seattle, WA)
Primary Examiner: Wayne Cai
Application Number: 14/494,238
Classifications
Current U.S. Class: Display Peripheral Interface Input Device (345/156)
International Classification: H04L 12/58 (20060101); H04M 1/725 (20060101); H04W 4/12 (20090101); H04M 3/51 (20060101);