Call control presence
Included are embodiments for detecting presence in a call control protocol environment. At least one embodiment of a method includes subscribing to events associated with a communication system and receiving a notification for an event, the event being associated with a communication between a first communications device and a second communications device.
This application is a continuation of U.S. patent application Ser. No. 11/608,440, filed Dec. 8, 2006, which claims the benefit of U.S. Provisional Application No. 60/848,598, filed Sep. 29, 2006 and U.S. Provisional Application No. 60/848,591, filed Sep. 29, 2006, each of which is hereby incorporated herein by reference in its entirety.
TECHNICAL FIELD
This disclosure relates to recording capabilities in a communications network. More specifically, this disclosure relates to recording in a Session Initiation Protocol configured network.
BACKGROUND
As communication technologies evolve, the functionality provided to users may improve. More specifically, Time Division Multiplexing (TDM) networks have historically provided communications functionality to call centers and customers. However, the introduction of Voice over Internet Protocol (VoIP) networks and similar technologies has provided the opportunity for call centers to provide improved services to customers, recording of incoming and outgoing communications, and monitoring of the recorded communications. While the utilization of VoIP networks has expanded the capabilities of call centers, VoIP may utilize protocols and technologies that have not historically been utilized. As such, limitations may exist within a VoIP network.
As a nonlimiting example, in many current VoIP environments, recording a communication may include passive taps (passive sniffing) and/or bridge connections. In such configurations, one or more parties may not realize that a recording is taking place. Additionally, determining presence data associated with a user of a communications device in a VoIP environment may be unavailable.
SUMMARY
Included are embodiments for detecting presence in a call control protocol environment. At least one embodiment of a method includes subscribing to events associated with a communication system and receiving a notification for an event, the event being associated with a communication between a first communications device and a second communications device.
Also included are embodiments of a system for detecting presence in a call control protocol environment. At least one embodiment includes a subscribing component configured to subscribe to events associated with a communication system and a receiving component configured to receive a notification for an event, the event being associated with a communication between a first communications device and a second communications device.
Other systems, methods, features, and advantages of this disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description and be within the scope of the present disclosure.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views. While several embodiments are described in connection with these drawings, there is no intent to limit the disclosure to the embodiment or embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.
Session Initiation Protocol (SIP) is an application-layer control (signaling) protocol for Voice over Internet Protocol (VoIP) implementations. SIP is a textual, client-server based protocol that end user systems and recorder controllers can utilize to provide call forwarding, callee and caller number identification, basic Automatic Call Distribution (ACD), and personal mobility, among other capabilities.
SIP addresses may generally take the form of a Uniform Resource Locator (URL). The SIP addresses can be embedded in Web pages and therefore can be integrated as part of implementations such as click-to-talk, for example. SIP, using a simple protocol structure, provides fast operation, flexibility, scalability, and multiservice support. SIP provides its own reliability mechanism. SIP creates, modifies, and terminates sessions with one or more participants. These sessions include Internet multimedia conferences, Internet telephone calls, and multimedia distribution. Members in a session can communicate using multicast, using a mesh of unicast relations, or a combination of these. SIP invitations used to create sessions carry session descriptions, which allow participants to agree on a set of compatible media types.
SIP supports user mobility by proxying and redirecting requests to a user's current location, which may be registered by the user. SIP is not tied to any particular conference control protocol. SIP may be designed to be independent of the lower-layer transport protocol and can be extended with additional capabilities. SIP transparently supports name mapping and redirection services, allowing the implementation of Integrated Services Digital Network (ISDN) and Intelligent Network telephony subscriber services. These facilities also enable personal mobility based on the use of a unique personal identity. SIP supports five facets of establishing and terminating multimedia communications: 1) user location, 2) user capabilities, 3) user availability, 4) call setup, and 5) call handling.
SIP can also initiate multi-party calls using a multipoint control unit (MCU) or fully meshed interconnection instead of multicast. Internet telephony gateways that connect Public Switched Telephone Network (PSTN) parties can also use SIP to set up calls between them. SIP is designed as part of a multimedia data and control architecture currently incorporating protocols, such as Resource Reservation Protocol (RSVP), Real-Time Transport Protocol (RTP), Real-Time Streaming Protocol (RTSP), and Service Advertising Protocol (SAP), among others. However, the functionality and operation of SIP does not depend on any of these protocols. SIP can also be used in conjunction with other call setup and signaling protocols. As such, an end system uses SIP exchanges to determine the appropriate end system address and protocol from a given address that is protocol-independent. As a nonlimiting example, SIP could be used to determine that the party can be reached using H.323, to find the H.245 gateway and user address, and then use H.225.0 to establish the call.
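By way of a nonlimiting illustration only (and not as part of the disclosed embodiments), the following Python sketch composes the text of a basic SIP INVITE request of the kind discussed above; the addresses, tag, and branch values are hypothetical placeholders.

def build_invite(caller: str, callee: str, call_id: str, sdp_body: str) -> str:
    # Assemble the request line and a minimal header set, then append the SDP body.
    headers = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds",
        f'From: "Caller" <sip:{caller}>;tag=1928301774',
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        f"Contact: <sip:{caller}>",
        "Content-Type: application/sdp",
        f"Content-Length: {len(sdp_body)}",
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + sdp_body

# Hypothetical session description offering a single audio stream.
sdp = ("v=0\r\no=- 0 0 IN IP4 192.0.2.10\r\ns=call\r\n"
       "c=IN IP4 192.0.2.10\r\nt=0 0\r\nm=audio 49170 RTP/AVP 0\r\n")
print(build_invite("alice@example.com", "2332@example.com", "12345678@example.com", sdp))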
Also coupled to network 100 is a call center 104. Call center 104 may be coupled to network 100 via local network 106; however, this is not a requirement. Call center 104 may be configured to provide customer service to users on communications devices 102a, 102b via agents on communications devices 102c, 102d, and/or 102e. Also coupled to local network 106 is a call control server 108, which may be configured to receive a communication and determine where to route the received communication. Call control server 108 may also include other logic for facilitating communications with call center 104.
Also coupled to local network 106 is a recorder controller 110. Recorder controller 110 may be configured to receive a communication and determine a technique for recording the communication. In the nonlimiting example of
Also coupled to local network 106 is a recording center 205. Recording center 205 may be configured as a Session Initiation Protocol (SIP) recording center, as discussed in more detail below and may include one or more recorder controllers 210. Recorder controller 210 may include recording logic, as discussed above and/or may include routing logic for receiving data related to a communication and determining which recorder 212a, 212b, and/or 212c to send the data for recording. Recorders 212a, 212b, and 212c may be configured to record data associated with a communication among communications devices 102.
Additionally coupled to network 100 is a local network 106a, which is coupled to agent communications devices 102c, 102d, and 102e. Network 100 is also coupled to recording center 405, via local network 106b. Local network 106b is coupled to a plurality of recorder controllers 110a, 110b, and a plurality of recorders 212a, 212b, and 212c.
One should note that while the nonlimiting example of
Referring now to
Referring to
Referring to
Referring to
In communicating with recorders (and/or recorder controller), recorder controllers 10 in
The processor 682 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the client device 106, a semiconductor based microprocessor (in the form of a microchip or chip set), a macroprocessor, or generally any device for executing software instructions.
The volatile and nonvolatile memory 684 can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and/or nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory 684 may incorporate electronic, magnetic, optical, and/or other types of storage media. One should note that the volatile and nonvolatile memory 684 can have a distributed architecture (where various components are situated remotely from one another), but can be accessed by the processor 682. Additionally, the volatile and nonvolatile memory 684 can include routing logic 687, recording logic 688, presence logic 699, and an operating system 686.
The software in volatile and nonvolatile memory 684 may include one or more separate programs, each of which includes an ordered listing of executable instructions for implementing logical functions. In the example of
A system component and/or module embodied as software may also be construed as a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When constructed as a source program, the program is translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the volatile and nonvolatile memory 684, so as to operate properly in connection with the operating system 686.
The Input/Output devices that may be coupled to system I/O Interface(s) 696 may include input devices, for example but not limited to, a keyboard, mouse, scanner, microphone, etc. Further, the Input/Output devices may also include output devices, for example but not limited to, a printer, display, speaker, etc. Finally, the Input/Output devices may further include devices that communicate both as inputs and outputs, for instance but not limited to, a modulator/demodulator (modem; for accessing another device, system, or network), a radio frequency (RF) or other transceiver, a telephonic interface, a media duplication system, a router, etc.
Additionally included are one or more network interfaces 698 for facilitating communication with one or more other devices. More specifically, network interface 698 may include any component configured to facilitate a connection with another device. In some embodiments, among others, the recorder controller 210 can include a network interface 698 that includes a Personal Computer Memory Card International Association (PCMCIA) card (also abbreviated as "PC card") for receiving a wireless network card; however, this is a nonlimiting example. Other configurations can include the communications hardware within the computing device, such that a wireless network card is unnecessary for communicating wirelessly. Similarly, other embodiments include network interfaces 698 for communicating via a wired connection. Such interfaces may be configured with Universal Serial Bus (USB) interfaces, serial ports, and/or other interfaces.
If recorder controller 210 includes a personal computer, workstation, or the like, the software in the volatile and nonvolatile memory 684 may further include a basic input output system (BIOS) (omitted for simplicity). The BIOS is a set of software routines that initialize and test hardware at startup, start the operating system 686, and support the transfer of data among the hardware devices. The BIOS is stored in ROM so that the BIOS can be executed when the client device 106 is activated.
When recorder controller 210 is in operation, the processor 682 may be configured to execute software stored within the volatile and nonvolatile memory 684, to communicate data to and from the volatile and nonvolatile memory 684, and to generally control operations of the client device 106 pursuant to the software. Software in memory, in whole or in part, may be read by the processor 682, perhaps buffered within the processor 682, and then executed.
One should note that while the description with respect to
Similarly, while the discussion with regard to
The embodiments discussed above may refer to a Session Initiation Protocol (SIP) recording center 405 configured to allow a call center 304 (as well as 104, 204, and 404, collectively referred to as 304) to integrate to a flexible integration point between recordable VoIP resources and other components associated with the call center 304. In at least one embodiment, calls directed to a call center 304 can be terminated at the call center 304 and recorded by the call center 304. This allows the call center 304 to develop add-ons and duplicate the interactions to a recording system. The recording center 305 can be configured to record all or a subset of interactions and prevent the development of error-prone integration points for every call center.
The recording center 305 may be configured to act as a generic SIP endpoint, emulating an SIP phone and/or an SIP trunk. One or more components may be present in at least one embodiment of a recording center 305. First, a recorder 212 may terminate Real-Time Transport Protocol (RTP) endpoints to receive audio or other media associated with the communication. Second, a recorder controller 110 may be included and may be responsible for controlling the actions of one or more recorders 212. The recorder controller 110 may be configured to direct the recorders 212 to receive media on open ports, instruct the recorders when to start and stop recordings on those ports, and tag the recording information to the recorders 212 (e.g., associate data with the recording). Third, the recorder controller 110 may also be responsible for an SIP stack. The stack may receive SIP "INVITEs" and/or other SIP messages, make internal decisions on whether or not a specific interaction should be recorded, find a recorder that has resources to perform the recording, and control the recording both on the SIP side and the recorder side. Fourth, the recording center 305 may include an application tier that may be configured to provide, at a minimum, applications to search and replay the interactions recorded. An integration framework may also be included and may be responsible for the import of switch data from the call center 304.
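The following is a minimal, hedged Python sketch of the recorder-controller flow described above: receiving an INVITE, deciding whether the interaction should be recorded, and selecting a recorder with available resources. The class names, the capacity model, and the decision rule are illustrative assumptions, not the disclosed implementation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Recorder:
    name: str
    max_ports: int
    used_ports: int = 0

    def has_capacity(self) -> bool:
        return self.used_ports < self.max_ports

@dataclass
class RecorderController:
    recorders: List[Recorder] = field(default_factory=list)

    def should_record(self, invite_headers: dict) -> bool:
        # Placeholder decision: record any interaction carrying party information.
        return "X-CallPartyInfo" in invite_headers

    def handle_invite(self, invite_headers: dict) -> Optional[Recorder]:
        if not self.should_record(invite_headers):
            return None
        for recorder in self.recorders:
            if recorder.has_capacity():
                recorder.used_ports += 1  # direct this recorder to open a media port
                return recorder
        return None  # no recording resources currently available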
One should note that tagging a recording (e.g., associating data with a recording) may be utilized for retrieval and/or replay, such that an administrator (or other party) can find the recording based on the values tagged to the recording. Tagging may be utilized via Calling Line Identifier (CLI), Dialed Number Identification Service (DNIS), queue, agent name, and/or other types of tagging.
Additionally, there may be one or more integration points associated with the call center 304. SIP integration may be a first integration point that a call center may perform. Depending on the particular configuration, interactions going through the call center 304 may be duplicated, since the call center 304 may be configured to provide one or more decision sources for indicating when an interaction is to be recorded. This may be useful, since the call center 304, depending on the particular embodiment, may be encouraged to expose a configuration concerning whether (and/or when) to record a single extension, communications device, group of agents, and/or any other business-level objects associated with the call center 304.
Another integration point may include an extended tagging integration. This may include a second integration point and may be configured to extend tagging information on the interaction. This integration may be accomplished from the above-described integration point and may be configured to provide tagging on integration. More specifically, in at least one embodiment, tagging associated with the "From" party and the "CallID" from the SIP INVITE message may be provided. As described in more detail below, Extended Tagging Integration (ETI) may be configured to provide additional attributes related to the interaction that can be sent to the recording center. In at least one nonlimiting example, the tagging may be sent to the call center, utilizing an "INFO" message. A call center configuration import interface (not shown) may also be included. If attributes are not sent to the call center, an administrator may be required to perform dual maintenance concerning employees and equipment associated with the call center.
The SIP Calls (interactions) between the recording center and the call center 304 may be configured to include one way media traveling from the call center 304 to the recording center. The recording center 305 may be configured to support both recordings with a single stream from the call center and recordings with multiple streams (preferably one stream per party in the initial interaction). In both cases, one or more of the streams may arrive as separate SIP communications.
In the case of a single stream, a single recording SIP communication (interaction) may be setup between the call center 304 and the recording center 305. If there is more than one stream from the call center 304, one or more of the streams may arrive on different SIP Calls (interactions) between the call center 304 and recording center 305. To associate the separate interactions together, a single unique identifier may be utilized for the interactions. This unique identifier may arrive as the value to an “x-callref” tag in the “From” field and/or as an “X-Call-ID” header on the communication. The following examples demonstrate embodiments of a unique identifier:
From “John Doe” <sip:2332@example.com>;
x-callref=12345678 X-Call-ID: 12345678.
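As a hedged illustration of associating separate streams by the unique identifier described above, the following Python sketch extracts the identifier from either the "x-callref" tag on the From field or the X-Call-ID header and groups streams under it; the regular expression and data structures are assumptions made for illustration only.

import re
from collections import defaultdict
from typing import Optional

def extract_call_ref(headers: dict) -> Optional[str]:
    # Prefer the x-callref tag on the From field; fall back to the X-Call-ID header.
    match = re.search(r"x-callref=(\w+)", headers.get("From", ""))
    if match:
        return match.group(1)
    return headers.get("X-Call-ID")

# Map each unique identifier to the RTP ports (streams) that belong to it.
streams_by_interaction = defaultdict(list)

def register_stream(headers: dict, rtp_port: int) -> None:
    ref = extract_call_ref(headers)
    if ref is not None:
        streams_by_interaction[ref].append(rtp_port)

register_stream({"From": '"John Doe" <sip:2332@example.com>;x-callref=12345678'}, 49170)
register_stream({"X-Call-ID": "12345678"}, 49172)
print(dict(streams_by_interaction))  # both streams grouped under "12345678"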
In at least one embodiment, the call center may be configured to treat interactions as duplicates of the original interactions and not treat the recording center as a conferenced party. There may be a plurality of reasons for this, one of which may be the desire for the call center to have the ability to implement duplicate media streaming functionality in the handsets, gateways or other systems in the call center.
Additionally, media sessions can be established between the endpoints and the recorders 212 instead of going through central resources, which decreases network utilization. Thus, if the recording session fails, this does not indicate that the original interaction should move to a failed state. The recording interaction may be considered a separate interaction, instead of being part of the original interaction, since the involved parties should not be aware of the recording session. The recording interaction can be maintained for a specific communications device and/or interaction, even when the interactions themselves are being renegotiated due to a conference, silence suppression, and/or other reasons. This means the recording interaction can be set up as a superset of SIP calls between parties.
A recorder can support more concurrent interaction recordings in cases where the default packet size is larger than 20 ms. In at least one embodiment, the parties of the call may notice the quality of the call decreasing when the packet sizes increase. Since the recording center records calls and plays them back at a later time, increasing the packet size may not affect the quality unless there are major network issues. In addition, this may reduce the effect on the network of recording and allow the recorder 212 to handle more concurrent recordings.
Additionally included in this disclosure is an extended tagging integration. The extended tagging integration allows for tagging to be performed on the recording integration. There are some extended SIP headers that may be created to allow this information to be passed from the call center to the recording center. The recording center may be configured to support the following headers:
The X-CallPartyInfo header may be used to convey information about a party in the interaction. The party may be a known (agent or supervisor) or unknown (customer) party. There may be one or more X-CallPartyInfos in an SIP message. A recording interaction with a single mixed stream may include the X-CallPartyInfos for a single SIP Call. A recording interaction with multiple streams (where each stream is treated as a separate SIP Call) may have the X-CallPartyInfos associated with the stream on the specific SIP Call. The value of the X-CallPartyInfo header may contain the display name of the party, the SIP URI (if one exists), and the party's role in the interaction. If the SIP URI is not defined for a party, the SIP URI field will be just “< >”. The role may include one or more of the following values: Caller, Called, Observer, Transferred, Conferenced, Consulted, and/or other formats. The format of the X-CallPartyInfo header may include “X-CallPartyInfo: DisplayName <SIP URI>; tag=value”. The following are nonlimiting examples using the syntax described above:
X-CallPartyInfo: “John Doe” <sip:2332@example.com>; role=Caller
X-CallPartyInfo: <sip:2332@example.com>
X-CallPartyInfo: <sip:2332@example.com>; role=Called
X-CallPartyInfo: “John Doe” <sip:2332@example.com>
X-CallPartyInfo: “John Doe”< >
X-CallPartyInfo: “John Doe” < >; role=Observer.
The X-CallInfo header may be used to send information about the interaction as a whole. This header may include a set of key-value pairs for each known property of the interaction. Some of these attributes may not make sense at the beginning of the call and/or may be changed during the call. The UAC/call center may then send these attributes using the INFO request or the final BYE request. A recording interaction with multiple SIP calls may have this header on a single SIP Call. The preferable call is the first call set up with the recording center (RC).
The following keys may also be associated with the communication data. CLI (Caller Line Information) may represent information about the caller, including the address (e.g., URI) from where a customer is calling and the customer's name. In a TDM (Time Division Multiplexing) environment, this may include the full 10-digit telephone number (in the United States). In a true VoIP environment, the CLI may include the "From" header (Display Name and SIP URI) of the original call.
One should note that the above description is included as a nonlimiting example. More specifically, the description above is used to illustrate the sending of CTI and party information related to a call, and is not necessarily limited to the protocol and/or headers described above.
Another key that may be associated with a communication is a DNIS (Dialed Number Identification Service). The DNIS may include the address first dialed to setup the interaction. “Completion Reason” may also be included. Completion Reason may include a code and/or string indicating why the interaction was stopped. “External Interaction ID” is an external value that can be tagged to interactions that are related. This may cover completely separate recording interactions in the recording center. “Route Point” is a queue on a switch or any directory number the interaction traversed before being routed to the current party. A plurality of keys may exist to indicate a plurality of Route Points the interaction traversed.
“Agent Skill” includes the Skill, Hunt Group, and/or Agent Group that is associated with the call center 304. This would give an indication of why the call was routed to the specific party. “Number Of Holds” may include a count of the number of times this interaction was put on hold. “Number Of Transfers” may include a count of the number of times the remote party was transferred in the call center. “Number Of Conferences” may include a count of the number of times the remote party was conferenced at the call center. “Number Of Parties” may include a total number of parties on the call. This may match the number of X-CallPartyInfo recorders received, however this is not a requirement.
“Call Direction” may be configured to indicate the initial direction of the call. Values associated with call direction may include inbound (an interaction that was received at the call center), outbound (an interaction that is initiated at the call center and is sent to another system), and internal (between two parties on the call center). “Held Time” may include the total amount of time the interaction was on hold. “Interaction Duration” may include the total amount of time of the interaction. The call center 304 may also be configured to send in any custom key to tag other data into the recording center 305. The data may be accessible to the recording center for searching, replay, and business rule evaluations. The format of the X-CallInfo header will be as follows: X-CallInfo: key=value; key=value. A nonlimiting example might include: X-CallInfo:
CLI=7705551212; DNIS=8001234567; RoutePoint=9200; RoutePoint=9421; NumberOf Holds=1; HeldTime=67; CallDirection=Inbound; CusomterAccountNumber=87654321; IssueLevel=Severe.
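The following Python sketch parses the X-CallInfo key/value format shown in the nonlimiting example above, collecting repeated keys (such as RoutePoint) into lists; the helper name and the handling of repeated keys are illustrative assumptions.

def parse_call_info(value: str) -> dict:
    pairs = {}
    for item in value.split(";"):
        if "=" not in item:
            continue
        key, _, val = item.partition("=")
        key, val = key.strip(), val.strip()
        if key in pairs:
            # Repeated keys (e.g., RoutePoint) are collected into a list.
            existing = pairs[key]
            pairs[key] = existing + [val] if isinstance(existing, list) else [existing, val]
        else:
            pairs[key] = val
    return pairs

example = ("CLI=7705551212; DNIS=8001234567; RoutePoint=9200; RoutePoint=9421; "
           "NumberOfHolds=1; HeldTime=67; CallDirection=Inbound")
print(parse_call_info(example))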
The call control server 108 may then receive copied communication data from the agent communications device 102 (block 740). More specifically, in at least one embodiment, communications device 102 may include logic for duplicating a media stream associated with a communication. During the communication, communications device 102 can copy (and, depending on the embodiment, manipulate) data associated with the communication and send the copied data to call control server 108. The copied data may include voice data, video data, and/or other data sent between an agent and a customer. The call control server 108 can then send at least a portion of the received data to a recorder (block 742).
Additionally, as discussed above, the call control server may also be configured to include at least one SIP header with the data sent to a recorder. As such, presence data may be conveyed to an administrator and/or other party. Similarly, some embodiments may be configured such that media may be received from the call control server 508, a gateway, and/or other Voice over Internet Protocol (VoIP) device. Further, some embodiments may be configured such that a gateway may exercise at least a portion of the functionality of the agent communications device.
The agent communications device 102 can create a communications link with a recorder 212 (and/or recorder server 110), as shown in block 838. The agent communications device 102 can then copy communication data associated with the current communication (block 840). The agent communications device 102 can then send the copied data to a recorder (block 842).
As discussed above, the agent communications device 102 can include SIP header data with the copied data to be sent to the recorder 212 (and/or recorder server 110). The header data may be configured to provide data to a recorder, an administrator, and/or other party.
Additionally, as discussed above, upon receiving the copied data, the recorder controller 110 may be configured to determine recorder bandwidth, recorder capabilities, and other data to determine a desired recorder to send the received data. Additionally, in configurations with a plurality of recorder controllers 110, the communications device 102 can determine to which recorder controller 110 to send copied data.
Recorder subsystem 414 can then receive a notification associated with a communication (block 1134). Recorder subsystem 414 can then determine whether to record the communication (block 1136). If recorder subsystem determines to record the communication, recorder subsystem can invoke a recording (block 1138).
One should note that utilization of header data, as discussed above, can facilitate invocation of a recording. More specifically, while in many nonlimiting examples a communications device sends a request for a recording, in this nonlimiting example, recorder subsystem 414 actively invokes a recording according to one or more received SIP headers. As such, the recording subsystem 414, recorder controller 110, and/or other components of a recording center 305 may have greater control of recording capabilities.
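A hedged sketch of the subscribe/notify pattern described above follows: a recorder subsystem subscribes to call events and, upon each notification, examines received SIP header data to decide whether to invoke a recording. The event names and the in-memory event bus are illustrative stand-ins for SIP SUBSCRIBE/NOTIFY signaling, not the disclosed components.

from typing import Callable, Dict, List

class EventBus:
    """In-memory stand-in for a SIP event subscription service."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(event_type, []).append(handler)

    def notify(self, event_type: str, headers: dict) -> None:
        for handler in self._subscribers.get(event_type, []):
            handler(headers)

def on_call_established(headers: dict) -> None:
    # Invoke a recording when the extended headers indicate a recordable interaction.
    if "X-CallPartyInfo" in headers:
        print("invoking recording for interaction", headers.get("X-Call-ID"))

bus = EventBus()
bus.subscribe("call-established", on_call_established)
bus.notify("call-established", {
    "X-Call-ID": "12345678",
    "X-CallPartyInfo": '"John Doe" <sip:2332@example.com>; role=Called',
})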
Recorder controller 110 may extract tagging information associated with the presence data (block 1236). Call control server 108 can set up a communications link between a caller and an agent (block 1238). Call control server 108 can then route the received call to the agent (block 1240). A communications device 102 associated with the agent then sends an invite to recorder controller 110 (block 1242). Recorder controller 110 can then determine current recording resources (block 1244). Recorder controller 110 may determine a destination for the communications data (block 1246). Agent communications device 102 then sends data to the determined destination (block 1248).
One should note that, in at least one exemplary embodiment, determining whether to record data associated with a communication may depend, at least in part, on business rules of the customer center, customer, and/or other entity. As a nonlimiting example, a business rule may include a rule to record all calls, record calls longer than a predetermined duration, record calls shorter than a predetermined duration, record calls between certain parties, and/or other business rules.
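As a hedged illustration of evaluating such business rules, the following Python sketch checks a recording decision against a rule set covering the examples mentioned above (record all calls, duration thresholds, and calls between certain parties); the rule names and structure are hypothetical.

from typing import Set

def should_record(duration_seconds: int, parties: Set[str], rules: dict) -> bool:
    if rules.get("record_all"):
        return True
    min_duration = rules.get("min_duration_seconds")
    if min_duration is not None and duration_seconds >= min_duration:
        return True
    max_duration = rules.get("max_duration_seconds")
    if max_duration is not None and duration_seconds <= max_duration:
        return True
    watched = rules.get("watched_parties", set())
    return bool(parties & watched)

print(should_record(45, {"sip:2332@example.com"}, {"min_duration_seconds": 30}))   # True
print(should_record(10, {"sip:9999@example.com"},
                    {"watched_parties": {"sip:2332@example.com"}}))               # False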
Similarly, embodiments of the process described in
Additionally, a customer center may include, but is not limited to, outsourced contact centers, outsourced customer relationship management, customer relationship management, voice of the customer, customer interaction, contact center, multi-media contact center, remote office, distributed enterprise, work-at-home agents, remote agents, branch office, back office, performance optimization, workforce optimization, hosted contact centers, and speech analytics, for example.
Additionally included in this disclosure are embodiments of integrated workforce optimization platforms, as discussed in U.S. application Ser. No. 11/359,356, filed on Feb. 22, 2006, entitled “Systems and Methods for Workforce Optimization,” which is hereby incorporated by reference in its entirety. At least one embodiment of an integrated workforce optimization platform integrates: (1) Quality Monitoring/Call Recording—voice of the customer; the complete customer experience across multimedia touch points; (2) Workforce Management—strategic forecasting and scheduling that drives efficiency and adherence, aids in planning, and helps facilitate optimum staffing and service levels; (3) Performance Management—key performance indicators (KPIs) and scorecards that analyze and help identify synergies, opportunities and improvement areas; (4) e-Learning—training, new information and protocol disseminated to staff, leveraging best practice customer interactions and delivering learning to support development; and/or (5) Analytics—deliver insights from customer interactions to drive business performance. By way of example, the integrated workforce optimization process and system can include planning and establishing goals—from both an enterprise and center perspective—to ensure alignment and objectives that complement and support one another. Such planning may be complemented with forecasting and scheduling of the workforce to ensure optimum service levels. Recording and measuring performance may also be utilized, leveraging quality monitoring/call recording to assess service quality and the customer experience.
The embodiments disclosed herein can be implemented in hardware, software, firmware, or a combination thereof. At least one embodiment disclosed herein is implemented in software and/or firmware that is stored in a memory and that is executed by a suitable instruction execution system. If implemented in hardware, as in an alternative embodiment, embodiments disclosed herein can be implemented with any or a combination of the following technologies: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
One should note that the flowcharts included herein show the architecture, functionality, and operation of a possible implementation of software. In this regard, each block can be interpreted to represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted and/or not at all. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
One should note that any of the programs listed herein, which can include an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer-readable medium could include an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of certain embodiments of this disclosure can include embodying the functionality described in logic embodied in hardware or software-configured mediums.
One should also note that conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more particular embodiments or that one or more particular embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
It should be emphasized that the above-described embodiments are merely possible examples of implementations, merely set forth for a clear understanding of the principles of this disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure.
Claims
1. A method for detecting presence in a Voice over Internet Protocol communications system environment, comprising:
- subscribing to receive an indication related to events associated with a communication system, the events being subscribed to by a recorder controller;
- receiving a notification for an event, the event being associated with a communication between a first communications device and a second communications device;
- extracting tagging data associated with presence data from the communication between the first device and the second communications device, the tagging data including information related to an interaction invite and an interaction response;
- placing the tagging data into an extended Session Initiation Protocol (SIP) header of the communication; and
- recording the portion of the communication having the extended Session Initiation Protocol (SIP) header based on information contained in the extended Session Initiation Protocol (SIP) header of the portion of the communication and the notification.
2. The method of claim 1, wherein subscribing to events associated with a communications system includes subscribing to events associated with a communications device.
3. The method of claim 1, further comprising, in response to determining to invoke a recording of the interaction, invoking a recording of the interaction.
4. The method of claim 1, wherein the notification includes presence data associated with the first communication device.
5. The method of claim 1, wherein the notification includes presence data associated with the first communications device and the second communications device.
6. The method of claim 1, wherein invoking a recording includes determining at least one recorder for recording by the recorder controller.
7. A system for detecting presence in a Voice over Internet Protocol call control protocol environment, comprising:
- a subscribing component configured to subscribe to events associated with a communication device;
- a receiving component configured to receive a notification for an event, the event being associated with a communication between a first communications device and a second communications device and the notification;
- an extraction component that extracts tagging data associated with presence data from the communication between the first device and the second communications device, the extraction including information related to an interaction invite and an interaction response, receiving component further placing the tagging data into a Session Initiation Protocol (SIP) header of the communication; and
- a recording component that receives a notification of the event from the second communications device, the recording being performed based on the information contained in the Session Initiation Protocol (SIP) header of the communication and the notification.
8. The system of claim 7, wherein the determining component is configured to determine whether to invoke a recording of the event based on at least one business rule.
9. The system of claim 7, further comprising an invoking component configured to, in response to determining to invoke a recording of the event, invoke a recording of the event.
10. The system of claim 7, wherein the subscribing component is further configured to subscribe to events associated with a communications device.
11. The system of claim 7, wherein the notification includes presence data associated with the first communication device.
12. The system of claim 7, wherein the notification includes presence data associated with the first communications device and the second communications device.
13. The system of claim 7, wherein the notification is received in a Session Initiation Protocol (SIP) format.
14. The system of claim 7, wherein the invoking component is further configured to determine at least one recorder for recording.
Type: Grant
Filed: Apr 28, 2010
Date of Patent: Mar 20, 2012
Assignee: Verint Americas, Inc. (Roswell, GA)
Inventors: Marc Calahan (Woodstock, GA), Jamie Richard Williams (Fleet), Robert John Barnes (Watford Herts)
Primary Examiner: William Deane, Jr.
Attorney: McKeon, Meunier, Carlin & Curfman
Application Number: 12/769,544
International Classification: H04M 3/42 (20060101); H04L 12/66 (20060101);