CALENDARING ACTIVITIES BASED ON COMMUNICATION PROCESSING

A method is provided in one embodiment and includes establishing a communication session involving a first endpoint and a second endpoint that are associated with a session, the first endpoint being associated with a first identifier and the second endpoint being associated with a second identifier. The method also includes evaluating first data for the first endpoint; evaluating second data for the second endpoint; and determining whether to initiate a calendaring activity based, at least in part, on the first data and the second data. In more specific embodiments, the method includes evaluating a first availability associated with the first endpoint; evaluating a second availability associated with the second endpoint; and suggesting a future meeting based, at least in part, on the first availability and the second availability.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 13/897,186, filed May 17, 2013, the entirety of which is incorporated herein by reference.

TECHNICAL FIELD

This disclosure relates in general to the field of communications and, more particularly, to calendaring activities based on communication processing.

BACKGROUND

Communication services have become increasingly important in today's society. In certain architectures, service providers may seek to offer sophisticated conferencing services for their end users. The conferencing architecture can offer an “in-person” meeting experience over a network. Conferencing architectures can seek to deliver real-time, face-to-face interactions between people using advanced visual, audio, and collaboration technologies.

In many communications scenarios, participants organize future meetings, follow-up items, etc., although frequently participants forget to setup the meeting, or follow through with these plans. In other cases, participants may make mistakes in the agreed time and place when setting up the meeting. In yet other instances, a participant may become aware of facts (e.g., a deadline, a birthdate, an anniversary, etc.) during the session. The participant may intend for these dates/events to be saved for future reference, but ultimately forget to properly record them. Finding these items (e.g., using search tools) may not be possible depending on the mode of communication or, alternatively, finding these items may simply be cumbersome.

BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:

FIG. 1 is a simplified schematic diagram illustrating a communication system for providing calendaring activities in accordance with one embodiment of the present disclosure;

FIG. 2 is a simplified block diagram illustrating one possible set of example implementation details associated with one embodiment of the present disclosure; and

FIG. 3 is a simplified flowchart illustrating example operations associated with one embodiment of the communication system.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview

A method is provided in one embodiment and includes establishing a communication session involving a first endpoint and a second endpoint that are associated with a session, the first endpoint being associated with a first identifier and the second endpoint being associated with a second identifier. The method also includes evaluating first data for the first endpoint; evaluating second data for the second endpoint; and determining whether to initiate a calendaring activity based, at least in part, on the first data and the second data. The ‘first data’ and the ‘second data’ can comprise any part of an exchange between two parties (e.g., a portion of a conversation). For example, such data can include audio data, video data, text, multi-media data, instant messaging data, graphics, pictures, email data, etc.

In more specific embodiments, the method includes evaluating a first availability associated with the first endpoint; evaluating a second availability associated with the second endpoint; and suggesting a future meeting based, at least in part, on the first availability and the second availability. The method can also include performing a speech to text analysis on at least a portion of the session in order to determine whether to initiate the calendaring activity. The method could also include assigning one or more follow-up items to a third endpoint that was not initially involved in the session, where the assigning is based, at least in part, on at least a portion of the session. In certain implementations, the method can include providing a pop-up window to offer a list of calendar options to the first endpoint and the second endpoint based, at least in part, on a portion of the session. In yet other examples, heuristics (e.g., histories of any kind) are used in order to determine whether to initiate the calendaring activity. Additionally, previously defined speech patterns can be used in order to determine whether to initiate the calendaring activity.

EXAMPLE EMBODIMENTS

Turning to FIG. 1, FIG. 1 is a simplified schematic diagram illustrating a communication system 10 for providing calendaring activities in accordance with one example embodiment of the present disclosure. Communication system 10 can be configured to use language processing in audio, video, or text communication to automate the process of calendaring any number of items (e.g., meetings, reminders, tasks associated with a date or a time, etc.). This can provide for an automated meeting/resource scheduling tool that could be associated with any suitable technology (e.g., a Telepresence™ session, a WebEx™ meeting, instant messaging sessions, etc.). In certain scenarios, the architecture of FIG. 1 can provide a real-time determination of available times for attendees to participate in a future meeting.

Language processing can be used within a given session to identify key aspects of the dialogue and, subsequently, automatically generate meetings, reminders, events, task lists (to-do lists), meeting discussion topics, etc. The term “calendaring activities” generically encompasses all such possibilities. Note that the automatic calendaring activities can also apply to facts that surfaced during a conversation, such as addresses, follow-up items, etc., all of which could be intelligently assigned to one or more contacts (either within or outside of the original session). The actual processing could be done in real-time (e.g., using a simple pop-up window), or provisioned at a more convenient time (e.g., at the end of the session in order to minimize interruptions during the meeting). In the former case involving real-time scheduling, this could obviate the need to later check availability amongst participants for follow-up meetings. Additionally, certain example scenarios can involve communication system 10 providing for automated reminder generation for birthdays, end-of-quarter deadlines, product ship dates, or any other relevant timeline, which may be personal or business in nature.

It should be noted that embodiments of the present disclosure can involve determining which actions to take based, at least in part, on the presence of one or more keywords, patterns of specific words, etc. For example, a set of actions can be derived from analyzing conversations in which an entire string of information is evaluated. The actions can include any suitable calendaring activities such as checking participant availability, setting up follow-up meetings, setting reminders based on meetings between individuals, assigning action items, etc. Hence, information that is extracted can be used to trigger workflows associated with any number of calendaring tools. Moreover, the architecture of the present disclosure can detect possible tasks based on conversations between individuals and, subsequently, automate those calendaring activities. For example, the system could list a number of potential automated options that it believes a given participant may want, where the participant can then decide whether to accept, modify, ignore, or propose a new activity, etc. based on what was offered. Note that a given individual does not have to take any special action to be offered the list of options, which can include any number of suggestions, recommendations, etc.
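
By way of illustration only, the following sketch shows how keyword patterns might be mapped to candidate calendaring actions. The patterns and action labels here are assumptions made for the example; the disclosure does not fix a particular rule set.

    import re

    # Illustrative trigger patterns mapped to candidate calendaring actions.
    # The patterns and the action labels are assumptions for this sketch.
    TRIGGER_PATTERNS = [
        (re.compile(r"\blet'?s meet\b|\bcan we meet\b", re.I), "propose_meeting"),
        (re.compile(r"\bfollow[- ]up\b|\baction item\b", re.I), "assign_follow_up"),
        (re.compile(r"\bbirthday\b|\banniversary\b|\bdeadline\b", re.I), "create_reminder"),
    ]

    def candidate_actions(utterance):
        """Scan one utterance and return any calendaring actions it suggests."""
        return [action for pattern, action in TRIGGER_PATTERNS
                if pattern.search(utterance)]

    print(candidate_actions("Can we meet on Friday to discuss this?"))
    # ['propose_meeting']

In a deployed system, matches like these would only nominate actions; the participant still accepts, modifies, or ignores the offered list, as described above.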

Moreover, as individuals use the calendaring activities, they will become more adept at taking advantage of the system's capabilities. Hence, during a given conversation (e.g., involving any suitable communication protocol), the end user would not have to continuously remember which subsequent meetings should be calendared, which follow-up action items should be addressed, etc. This could obviate the need for a specific individual to set up a follow-up meeting, check availabilities of other participants, gather agendas, retrieve previous meeting minutes, etc. Furthermore, the system has access to the participants' contact information, which allows for a quick calendaring entry involving any number of subsets of the participants. These subsets can be targeted based on the conversations that occurred during the original meeting. The participant information can be used to automatically suggest the next available timeslot, including suggesting a given location (e.g., by analyzing room availability that can accommodate particular technologies, the geographic information of the participants, etc.).

The architecture can have access to documents, recordings, stored information of any kind, etc. that it can send individuals to catch them up on previous meetings, meeting minutes, etc. Again, the system is offloading chores that would otherwise be relegated to a set of participants. Additionally, the system can continuously learn from speakers such that it becomes better at speaker recognition, conversation patterns, etc. The system can store speaker models and, further, have those arranged by speaker IDs.

In one particular embodiment, the system would only take action on calendaring activities based on a two-way communication. For example, if Speaker 1 stated: “Let's meet tomorrow . . . ”, the counterparty would need to respond in the affirmative for the system to prompt the user for possible scheduling activities. In another example, the system can prompt a user to set up a meeting if a specific time is mentioned. In other cases, the system can create a vocabulary of terms that signal a (culturally) affirmative response to ensure that the rules are sufficiently narrow before a user is prompted.
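
A minimal sketch of this two-way rule follows, assuming a simple turn list and an illustrative affirmative vocabulary; neither is prescribed by the disclosure.

    # Sketch of the two-way rule: a proposal becomes a scheduling prompt only
    # after a *different* speaker answers affirmatively. The affirmative
    # vocabulary is an illustrative, culturally specific placeholder.
    AFFIRMATIVES = {"yes", "sure", "ok", "okay", "sounds good", "works for me"}

    def should_prompt(turns):
        """turns: list of (speaker, utterance) pairs, in order."""
        pending_speaker = None
        for speaker, utterance in turns:
            text = utterance.lower().rstrip(".!?")
            if "let's meet" in text or "can we meet" in text:
                pending_speaker = speaker
            elif pending_speaker is not None and speaker != pending_speaker:
                if any(text == a or text.startswith(a + " ") for a in AFFIRMATIVES):
                    return True
                pending_speaker = None  # a non-affirmative reply clears it
        return False

    print(should_prompt([("Speaker 1", "Let's meet tomorrow."),
                         ("Speaker 2", "Sure.")]))  # True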

Returning to the particular example of FIG. 1, FIG. 1 includes multiple endpoints 12a-f associated with various end users of a session. In general, endpoints may be geographically separated; in this particular example, endpoints 12a-d are located in San Jose, Calif. and a set of counterparty endpoints are located in Chicago, Ill. FIG. 1 includes a communication tracking module 20a coupled to endpoints 12a-d.

Semantically, and in the context of a session, a name, a speaker ID, a username, a company ID, and/or a location of a participant can be used by communication tracking module 20a to assign a virtual position to the end user. This can also be used to determine how to trigger the reminders, calendar entries, events, etc. In certain embodiments, communication system 10 is configured to utilize its intelligence to interface with any number of calendaring systems (e.g., Microsoft Outlook™, WebEx™, Jabber™, Apple email, or any other suitable calendaring technology). For example, consider a phone conversation taking place on Wednesday the 15th, which provides the following transcript (that could readily be generated by communication tracking module 20a, which may include a speech to text engine):

Speaker 1: So is Raman's birthday tomorrow?

Speaker 2: No, it is on Friday.

OR

Speaker 2: Yes.

Communication system 10 would evaluate this information and subsequently ask either one (or both) of the speakers if they would like to have a calendar reminder for the correct birthday, along with a potential invitation to contact Raman (which could be linked to his contact information associated with a given e-mail technology).
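
To make the date handling concrete, the following sketch resolves "tomorrow" and weekday names against the session date (here, a Wednesday the 15th). It is a simplified assumption of how such a resolver could work, not the module's actual logic.

    from datetime import date, timedelta

    WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday", "friday",
                "saturday", "sunday"]

    def resolve_relative_date(phrase, session_date):
        """Map 'tomorrow' or a weekday name to an absolute date, relative
        to the session date. Sketch: weekday names resolve to the next
        occurrence after the session date."""
        phrase = phrase.lower()
        if phrase == "tomorrow":
            return session_date + timedelta(days=1)
        if phrase in WEEKDAYS:
            delta = (WEEKDAYS.index(phrase) - session_date.weekday()) % 7
            return session_date + timedelta(days=delta or 7)
        raise ValueError("unrecognized phrase: " + phrase)

    session = date(2017, 3, 15)  # a Wednesday the 15th
    print(resolve_relative_date("tomorrow", session))  # 2017-03-16 (Thursday)
    print(resolve_relative_date("friday", session))    # 2017-03-17 (Friday)

With the dates resolved, the reminder can be offered for the correct day regardless of which form Speaker 2's answer takes.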

Consider another example involving an email/chat conversation between two people on Wednesday the 15th:

User 1: Can we meet on Friday to discuss this?

User 2: Sure. Does 9 AM PT work for you?

User 1: Sure.

Again, communication system 10 could automatically request if the user(s) would like to set up a meeting for 9 AM PT on Friday the 17th with User 1 and User 2. This could be done during the conversation using a simple communication/a simple solicitation (e.g., a pop-up query, etc.), or once the conversation ends. User preferences can be designated by each individual, by the administrator, etc., where these preferences can be accessed at any appropriate time in order to render a decision as to when to coordinate calendar events, calendar reminders, etc.
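
The time reference in this exchange ("9 AM PT") can likewise be extracted mechanically. The sketch below is one assumed approach using a regular expression; the disclosure does not specify the extraction mechanism.

    import re
    from datetime import time

    TIME_RE = re.compile(
        r"\b(\d{1,2})(?::(\d{2}))?\s*(AM|PM)\s*(PT|MT|CT|ET)?\b", re.I)

    def extract_time(utterance):
        """Pull a clock time and an optional U.S. time-zone hint out of an
        utterance such as 'Does 9 AM PT work for you?'. Sketch only."""
        m = TIME_RE.search(utterance)
        if not m:
            return None
        hour = int(m.group(1)) % 12
        if m.group(3).upper() == "PM":
            hour += 12
        return time(hour, int(m.group(2) or 0)), (m.group(4) or "").upper() or None

    print(extract_time("Sure. Does 9 AM PT work for you?"))
    # (datetime.time(9, 0), 'PT')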

In one particular embodiment, extensions of such a use case could be applicable to any number of products, technologies, etc. For example, at the end of a Telepresence™ or WebEx™ meeting, the architecture can automatically schedule a follow-up call and, further, alert participants in case they have conflicting meetings, they are on personal time off (PTO), etc. such that timeslots already in the calendar system would be honored.

In one example implementation, communication tracking module 20a can be configured to use basic speech, words, text, audio of any kind, and/or pattern-recognition to identify opportune times to trigger calendaring activities. Once a given segment is detected for analysis, communication tracking module 20a can begin initiating the provisioning of calendaring activities. Note that the behavior, speech patterns, history, etc. of the participants can be accounted for in suggesting the calendaring activities. Along similar lines, the exact data segments to be targeted for analysis can be based on user histories (where they exist), previous speech patterns, previous behaviors, etc. It should be noted that any number of heuristics may be used in conjunction with the present disclosure in order to facilitate its operations. The heuristics can be used to trigger the calendaring activities and/or used to determine which segments of sessions should be analyzed for providing such activities. Where no such historical information is present, the system would simply evaluate a given session based on its programmed intelligence for evaluating when to trigger calendaring activities.
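
As one assumed illustration of these heuristics, a per-user acceptance history could adapt the trigger threshold, with a programmed default used when no history exists. The scoring scheme is an assumption for this sketch.

    def should_analyze(segment_score, user_history=None, default_threshold=0.7):
        """Decide whether a session segment warrants calendaring analysis.
        Sketch: with history, lower the threshold for users who accept
        prompts often; with no history, fall back to the default."""
        if user_history:
            accepted = user_history.get("prompts_accepted", 0)
            offered = max(user_history.get("prompts_offered", 1), 1)
            threshold = default_threshold * (1.0 - 0.5 * accepted / offered)
        else:
            threshold = default_threshold
        return segment_score >= threshold

    print(should_analyze(0.6))                                               # False
    print(should_analyze(0.6, {"prompts_accepted": 8, "prompts_offered": 10}))  # True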

It is imperative to note that communication system 10 can alternatively leverage other parameters of speech in order to execute the intelligent reminder and calendaring activities, as discussed herein. In one example, any participant in the session can use a soft button in order to automatically prompt calendaring activities. In another example, the architecture can use pitch, rising pitch, various vocative chants, boundary tones, boundary conditions, delay, down step, pitch accents, or any modifications or hybrids of these elements, or any other suitable language processing parameter to achieve the intelligent calendaring activities being outlined herein. This includes proprietary characteristics (e.g., organically developed) that may be readily programmed into the architecture. For example, specific patterns, word exchanges, specific words, specific names, specific sounds, etc. can be programmed into an architecture to be one of the language processing parameters, which can be detected and subsequently used to make an intelligent calendaring decision for the participants.

Turning to FIG. 2, FIG. 2 is a simplified block diagram illustrating one possible set of implementation details associated with communication system 10. In this example, endpoints 12a and 12c are configured to interface with communication tracking module 20a, which is coupled to a network 40. Along similar rationales, a set of endpoints 12e and 12f are configured to interface with either communication tracking module 20a or 20f, which is provisioned within a cloud network 45. In one particular example, a given endpoint (such as 12e) includes a respective communication tracking module 20e, a culturally-based response module 56e, and a speech to text module 57e such that the endpoint can conduct (or at least share) some of the reminder, calendaring, etc. responsibilities, along with potentially assisting in identifying the triggers for initiating such activities.

In the particular implementation of FIG. 2, endpoints 12a, 12c, 12e, 12f include a respective processor 32a, 32c, 32e, 32f, a respective memory element 44a, 44c, 44e, 44f, a respective network interface 46a, 46c, 46e, 46f, a respective transmitting module 48a, 48c, 48e, 48f, and a respective receiving module 42a, 42c, 42e, 42f. Any one or more of these internal items of the endpoints may be consolidated or eliminated entirely, or varied considerably, where those modifications may be made based on particular communication needs, specific protocols, etc.

Networks 40 and 45 represent a series of points or nodes of interconnected communication paths for receiving and transmitting packets of information that propagate through communication system 10. Networks 40 and 45 offer a communicative interface between the endpoints and other network elements (e.g., communication tracking modules 20a, 20f), and may be any local area network (LAN), Intranet, extranet, wireless local area network (WLAN), metropolitan area network (MAN), wide area network (WAN), virtual private network (VPN), or any other appropriate architecture or system that facilitates communications in a network environment. Networks 40 and 45 may implement a UDP/IP connection and use a TCP/IP communication protocol in particular embodiments of the present disclosure. However, networks 40 and 45 may alternatively implement any other suitable communication protocol for transmitting and receiving data packets within communication system 10. Networks 40 and 45 may foster any communications involving services, content, video, voice, or data more generally, as it is exchanged between end users and various network elements.

In one example implementation, communication tracking modules 20a, 20f include respective processors 52a, 52f, respective memory elements 54a, 54f, respective speech to text modules 57a, 57f, and respective culturally-based response modules 56a, 56f. Communication tracking modules 20a, 20f can be aware of (and potentially store) information about who is speaking, and/or who is being spoken to during the session. Communication tracking modules 20a, 20f can selectively trigger calendaring activities for various end users using any suitable analysis of the audio/video/media inputs.

In one particular instance, communication tracking modules 20a, 20f are network elements configured to exchange data in a network environment such that the intelligent language processing-based calendaring activities discussed herein are achieved. As used herein in this Specification, the term ‘network element’ is meant to encompass various types of routers, switches, gateways, bridges, loadbalancers, firewalls, servers, inline service nodes, proxies, processors, modules, or any other suitable device, network appliance, component, proprietary element, or object operable to exchange information in a network environment. The network element may include appropriate processors, memory elements, hardware and/or software to support (or otherwise execute) the activities associated with language processing-based calendaring, as outlined herein. Moreover, the network element may include any suitable components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information.

In a specific implementation, communication tracking modules 20a, 20f include software to achieve (or to foster) the language processing-based calendaring operations, as outlined herein in this document. Furthermore, in one example, communication tracking modules 20a, 20f can have an internal structure (e.g., have a processor, a memory element, etc.) to facilitate some of the operations described herein. In other embodiments, all of these calendaring activities may be provided externally to these elements, or included in some other network element to achieve this intended functionality. Alternatively, any other network element can include this software (or reciprocating software) that can coordinate with communication tracking modules 20a, 20f in order to achieve the operations, as outlined herein.

Before turning to some of the additional operations of communication system 10, a brief discussion is provided about some of the infrastructure of FIG. 1. In the example of FIG. 1, each endpoint 12a-f is fitted discreetly along a desk and, further, is proximate to its associated participant or end user. Such endpoints could be provided in any other suitable location, as FIG. 1 only offers one of a multitude of possible implementations for the concepts presented herein. Note that the numerical and letter designations assigned to the endpoints do not connote any type of hierarchy; the designations are arbitrary and have been used for purposes of teaching only. These designations should not be construed in any way to limit their capabilities, functionalities, or applications in the potential environments that may benefit from the features of communication system 10.

In a particular example implementation, endpoints 12a-f can assist in receiving and communicating video, audio, and/or multimedia data. Other types of endpoints are certainly within the broad scope of the outlined concept, and some of these example endpoints are further described below. Each endpoint 12a-f can be configured to interface with a respective multipoint manager element, such as communication tracking module 20a, which can be configured to coordinate and to process information being transmitted by the end users.

As illustrated in FIG. 1, a number of cameras 14a-14c and displays 15a-15c are provided for the conference. Displays 15a-15c can be configured to render images to be seen by the end users and, in this particular example, reflect a three-display design (e.g., a ‘triple’). Note that as used herein in this specification, the term ‘display’ is meant to connote any element that is capable of rendering an image during a video conference. This would necessarily be inclusive of any panel, screen, Telepresence display or wall, computer display, plasma element, television, monitor, or any other suitable surface or element that is capable of such a rendering.

In particular implementations, the components of communication system 10 may use specialized applications and hardware to create a system that can leverage a network. Communication system 10 can use Internet protocol (IP) technology and, further, can run on an integrated voice, video, and data network. The system can also support high-quality, real-time voice and video communications using broadband connections. It can further offer capabilities for ensuring quality of service (QoS), security, reliability, and high availability for high-bandwidth applications such as video. Power and Ethernet connections for all end users can be provided. Participants can use their laptops to access data for the meeting, join a meeting place protocol or a Web session, or stay connected to other applications throughout the meeting.

Endpoints 12a-f may be used by someone wishing to participate in a video conference, an audio conference, an e-mail conference, an instant messaging conference, etc. in communication system 10. The broad term ‘endpoint’ may be inclusive of devices used to initiate a communication, such as a switch, a console, a proprietary endpoint, a telephone, a mobile phone, a bridge, a computer, a personal digital assistant (PDA), a laptop or electronic notebook, an iPhone, an iPad, a Google Droid, any other type of smartphone, or any other device, component, element, or object capable of initiating voice, audio, or data exchanges within communication system 10.

Endpoints 12a-f may also be inclusive of a suitable interface to an end user, such as a microphone, a display, or a keyboard or other terminal equipment. Endpoints 12a-f may also include any device that seeks to initiate a communication on behalf of another entity or element, such as a program, a database, or any other component, device, element, or object capable of initiating a voice or a data exchange within communication system 10. Data, as used herein, refers to any type of video, numeric, voice, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another. Additional details relating to endpoints 12a-f are provided below with reference to FIG. 2.

In operation, communication tracking module 20a can be configured to establish, or to foster a session between one or more end users, which may be located in various other sites and locations. Communication tracking module 20a can also coordinate and process various policies involving endpoints 12a-f. In general, communication tracking module 20a may communicate with endpoints 12a-f through any standard or proprietary conference control protocol. Communication tracking module 20a includes a switching component that determines which signals are to be routed to individual endpoints 12a-f. Communication tracking module 20a is configured to determine how individual end users are seen by others involved in the video conference. Furthermore, communication tracking module 20a can control the timing and coordination of this activity. Communication tracking module 20a can also include a media layer that can copy information or data, which can be subsequently retransmitted or simply forwarded along to one or more endpoints 12a-f.

Note that in certain example implementations, the language processing-based calendaring activities outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit [ASIC], digital signal processor [DSP] instructions, software [potentially inclusive of object code and source code] to be executed by a processor, or other similar machine, etc.). In some of these instances, a memory element [as shown in FIG. 2] can store data used for the operations described herein. This includes the memory element being able to store software, logic, code, or processor instructions that are executed to carry out the activities described in this Specification. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor [as shown in FIG. 2] could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array [FPGA], an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.

Hence, any of the devices illustrated in the preceding FIGURES may include a processor that can execute software or an algorithm to perform the calendaring activities, as discussed in this Specification. Furthermore, communication tracking modules 20a, 20f can include memory elements for storing information to be used in achieving the intelligent calendaring activities, as outlined herein. These devices may further keep information in any suitable memory element [random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.], software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein (e.g., database, table, cache, key, etc.) should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’

FIG. 3 is a simplified flowchart 100 illustrating one potential operation associated with an embodiment of communication system 10. This particular flow may begin at 102, where meeting startup information is gathered. Hence, in this particular example, meeting startup information can be provided at the outset of a session. This information can include data related to participants, an agenda, etc., along with updates associated with attendees (as they join the meeting), information about video endpoints, nodes, speaker identifiers (IDs), etc. For example, user IDs, participant names, job titles, e-mail addresses, symbols, pictures, proper names, speaker identifications, graphics, avatars, or any other suitable identifier is collected.

[Note that the term ‘identifier’ is a broad term that includes all of these possibilities, and others that can suitably identify a given participant, endpoint, etc.] The identifier can be suitably stored in any appropriate location (e.g., at communication tracking module 20a). The identifier collection activity can be performed manually by individual participants, by the endpoints themselves, or automatically provided by the architecture (e.g., through software provisioned in the architecture, through communication tracking module 20a, etc.).

At 104, speech prerequisites are provided. These can include any suitable objects such as speech itself, new metadata, new speakers, etc. Moreover, in terms of the speech processing itself, during the session, communication tracking module 20a has the intelligence to account for accents, language translation, affirmative/negative keywords, dialects, etc. In addition, communication tracking module 20a can tailor its evaluations to be culture specific.

As the conversation is being monitored, an event is identified at 106. In terms of events, new speech emanating from the session can be interpreted using any suitable protocol (e.g., speech-to-text (STT)). Other events can include new participants entering the session, leaving the session, changing their mode of communication (e.g., changing the devices, altering their communication pathways, adjusting their bandwidth consumption, bitrates, etc.). Communication system 10 can detect which participants are speaking, writing, etc. (in real-time) and which are being addressed and, further, use this information in order to trigger any number of calendar entries, events, reminders, etc.

At 108, the speech to text is processed. At 110, this information is converted to a conversation, where speaker identification tags are added. Rules are applied at 112 to identify a portion of the conversation of interest. At 114, the participants are identified based, at least in part, on meeting information, speech to text data, speaker identification, etc. At 116, a type of prompt is identified, where a separate workflow can be spawned. The workflows can include reminders 118, meeting set-up 120, or any information activity 122 (e.g., e-mail/address/contact/etc.).
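
The 108-122 portion of the flow can be read as a dispatch pipeline. The sketch below uses placeholder rules and workflow handlers, since the flowchart does not fix their implementations.

    # Sketch of the 108-122 flow; the rules and handlers are placeholders.
    def spawn_reminder(ctx): print("reminder:", ctx)             # 118
    def spawn_meeting_setup(ctx): print("meeting set-up:", ctx)  # 120
    def spawn_info_activity(ctx): print("info activity:", ctx)   # 122

    WORKFLOWS = {"reminder": spawn_reminder,
                 "meeting": spawn_meeting_setup,
                 "info": spawn_info_activity}

    def process_segment(tagged_turns, rules):
        """tagged_turns: list of (speaker_id, text) pairs produced by
        speech to text (108) and speaker tagging (110). rules: callables
        that inspect the turns (112-114) and return a prompt type (116)
        or None."""
        for rule in rules:
            prompt_type = rule(tagged_turns)
            if prompt_type in WORKFLOWS:
                WORKFLOWS[prompt_type]({"turns": tagged_turns})
                return prompt_type
        return None

    rules = [lambda turns: "meeting"
             if any("meet" in text for _, text in turns) else None]
    process_segment([("u1", "Can we meet on Friday?"), ("u2", "Sure.")], rules)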

Note that the architecture can readily use speaker identification based on voice/speech patterns to determine participants and, further, use this information when deciding for whom to trigger the calendaring (or for the specific participants that form a subset of the larger conversation).

For example, when a person asks a question such as:

    • Person 1: Is it your anniversary tomorrow?
    • Person 2: Yes.

Person 2 may be joining from an endpoint/conference room that includes seven people. Speaker identification can be used to narrow the trigger down to a specific person in that room. Furthermore, this could additionally be extended to a use case for an approval without the need to actually sign a document or send an email. Consider the following example:

    • Person 1: Do you approve this design/merger/hiring decision?
    • Person 3: Yes. (Note that the system does nothing since the pre-meeting information/agenda listed Person 2 as the approver.)
    • Person 2: I agree.

The system can then auto-sign a document with the approver's digital print or trigger a pop-up to confirm the action of approval. Some of these options can be designated through simple user preferences.
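
A minimal sketch of this approver check follows, assuming the pre-meeting agenda names the approver and a simple preference flag selects between auto-signing and a confirmation pop-up; the names and fields are illustrative.

    AGENDA = {"approval_item": "design", "approver": "Person 2"}  # pre-meeting info

    def handle_affirmative(speaker, agenda, confirm_first=True):
        """Ignore affirmatives from anyone but the designated approver;
        otherwise either raise a confirmation pop-up or auto-sign,
        according to a simple user preference."""
        if speaker != agenda["approver"]:
            return "ignored"  # e.g., Person 3's "Yes." above
        if confirm_first:
            return "pop-up: confirm approval of " + agenda["approval_item"] + "?"
        return "auto-signed " + agenda["approval_item"] + " for " + speaker

    print(handle_affirmative("Person 3", AGENDA))  # ignored
    print(handle_affirmative("Person 2", AGENDA))  # pop-up: confirm approval of design?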

Note that with the examples provided above, as well as numerous other examples provided herein, interaction may be described in terms of two, three, or four network elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that communication system 10 (and its teachings) are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of communication system 10 as potentially applied to a myriad of other architectures. Additionally, although described with reference to particular scenarios, where a particular module, such as a vocative detector module, is provided within a network element, these modules can be provided externally, or consolidated and/or combined in any suitable fashion. In certain instances, such modules may be provided in a single proprietary unit.

It is also important to note that the steps in the appended diagrams illustrate only some of the possible signaling scenarios and patterns that may be executed by, or within, communication system 10. Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of teachings provided herein. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by communication system 10 in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings provided herein.

Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. It is also imperative to note that communication system 10 is entirely language independent. Different languages place different emphases and/or different stresses on their words. Moreover, even within the same language, different people have distinct dialects, language, patterns, and/or various ways of stating the same name, the same location, etc. Communication system 10 can readily be used in any such language environments, as the teachings of the present disclosure are equally applicable to all such alternatives and permutations.

In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

Claims

1. A computer-implemented method comprising:

establishing a communication session involving a first endpoint and a second endpoint that are associated with the communication session;
monitoring a conversation associated with the communication session to identify a portion of the conversation that includes an exchange between a first participant and a second participant;
evaluating first data associated with the first participant and second data associated with the second participant during the exchange;
determining an identity of the first participant and an identity of the second participant using voice/speech pattern recognition; and
assigning one or more follow-up items to a third endpoint that was not initially involved in the communication session, wherein the assigning is based, at least in part, on the first data and the second data associated with the first participant and the second participant during the exchange.

2. The method of claim 1, further comprising: evaluating a first availability associated with the first participant and a second availability associated with the second participant; and suggesting a future meeting based, at least in part, on the first availability and the second availability.

3. The method of claim 2, wherein when the first data and the second data associated with the first participant and the second participant during the exchange include information for the future meeting, further comprising initiating a calendar entry for the future meeting based, at least in part, on the first availability and the second availability, and respective identities of the first and second participants.

4. The method of claim 3, further comprising: determining historical speech patterns and behaviors of the first and second participants, wherein the determined historical speech patterns and behaviors are used to identify the first and second participants, and wherein the speech patterns include at least one of pitch, rising pitch, various vocative chants, boundary tones, boundary conditions, delay, down step, and pitch accents.

5. The method of claim 3, further comprising determining whether to initiate the calendar entry based on heuristics.

6. The method of claim 3, further comprising determining whether to initiate the calendar entry based on previously defined speech patterns.

7. The method of claim 1, wherein the communication session is selected from a group consisting of: a) a video conference session; b) an audio conference session; c) an instant messaging session; and d) an e-mail session.

8. The method of claim 1, further comprising: presenting a pop-up window to offer a list of calendar options to the first participant and the second participant based, at least in part, on the first data and the second data associated with the first participant and the second participant during the exchange.

9. Non-transitory media that includes code for execution and when executed by a processor operable to perform operations comprising:

establishing a communication session involving a first endpoint and a second endpoint that are associated with the communication session;
monitoring a conversation associated with the communication session to identify a portion of the conversation that includes an exchange between a first participant and a second participant;
evaluating first data associated with the first participant and second data associated with the second participant during the exchange;
determining an identity of the first participant and an identity of the second participant using voice/speech pattern recognition; and
assigning one or more follow-up items to a third endpoint that was not initially involved in the communication session, wherein the assigning is based, at least in part, on the first data and the second data associated with the first participant and the second participant during the exchange.

10. The media of claim 9, the operations further comprising: evaluating a first availability associated with the first participant and a second availability associated with the second participant; and suggesting a future meeting based, at least in part, on the first availability and the second availability.

11. The media of claim 10, wherein when the first data and the second data associated with the first participant and the second participant during the exchange include information for the future meeting, the operations further comprising initiating a calendar entry for the future meeting based, at least in part, on the first availability and the second availability, and respective identities of the first and second participants.

12. The media of claim 11, the operations further comprising: determining historical speech patterns and behaviors of the first and second participants, wherein the determined historical speech patterns and behaviors are used to identify the first and second participants, and wherein the speech patterns include at least one of pitch, rising pitch, various vocative chants, boundary tones, boundary conditions, delay, down step, and pitch accents.

13. The media of claim 11, the operations further comprising: determining whether to initiate the calendar entry based on heuristics.

14. The media of claim 11, the operations further comprising: determining whether to initiate the calendar entry based on previously defined speech patterns.

15. The media of claim 9, the operations further comprising: presenting a pop-up window to offer a list of calendar options to the first participant and the second participant based, at least in part, on the first data and the second data associated with the first participant and the second participant during the exchange.

16. The media of claim 9, wherein the communication session is selected from a group consisting of: a) a video conference session; b) an audio conference session; c) an instant messaging session; and d) an e-mail session.

17. An apparatus comprising:

a communication interface configured to enable network communications in order to establish a communication session involving a first endpoint and a second endpoint that are associated with the communication session; and
a processor coupled to the communication interface, wherein the processor is configured to perform operations including: monitoring a conversation associated with the communication session to identify a portion of the conversation that includes an exchange between a first participant and a second participant; evaluating first data associated with the first participant and second data associated with the second participant during the exchange; determining an identity of the first participant and an identity of the second participant using voice/speech pattern recognition; and assigning one or more follow-up items to a third endpoint that was not initially involved in the communication session, wherein the assigning is based, at least in part, on the first data and the second data associated with the first participant and the second participant during the exchange.

18. The apparatus of claim 17, wherein the operations further include: evaluating a first availability associated with the first participant and a second availability associated with the second participant; and suggesting a future meeting based, at least in part, on the first availability and the second availability.

19. The apparatus of claim 18, wherein the operations further include: when the first data and the second data associated with the first participant and the second participant during the exchange include information for the future meeting, initiating a calendar entry for the future meeting based, at least in part, on the first availability and the second availability, and respective identities of the first and second participants.

20. The apparatus of claim 19, wherein the operations further include: determining historical speech patterns and behaviors of the first and second participants, wherein the determined historical speech patterns and behaviors are used to identify the first and second participants, and wherein the speech patterns include at least one of pitch, rising pitch, various vocative chants, boundary tones, boundary conditions, delay, down step, and pitch accents.

Patent History
Publication number: 20170353533
Type: Application
Filed: Aug 25, 2017
Publication Date: Dec 7, 2017
Inventor: Raman Thapar (Mountain View, CA)
Application Number: 15/686,459
Classifications
International Classification: H04L 29/08 (20060101); H04L 12/18 (20060101); H04L 29/06 (20060101); G06Q 10/10 (20120101); G10L 15/26 (20060101);