Method, apparatus, system, and article of manufacture for providing distributed convergence nodes in a communication network environment

Distributed convergence nodes referred to as “Supernodes”, embodied as a functional technology component within an end user application, automatically determine whether said component should become “active” and assume the responsibility of forwarding IP multicast data present on a LAN (which supports IP multicast communication) to a “Routing Supernode” via a WAN (which does not support IP multicast communication). The Routing Supernode, in turn, is responsible for forwarding that traffic to other Supernodes present on other LANs. The traffic sent to and from the Routing Supernode is sent via unicast communication. All Supernodes are responsible for forwarding traffic on their respective LAN across the WAN to a Routing Supernode, and for forwarding traffic received over the WAN from the Routing Supernode onto their own respective LANs. An election process determines which device in a LAN is to operate as a Supernode.

Description
TECHNICAL FIELD

This disclosure relates generally to computer software and/or hardware for computer and communication systems networking, and more particularly but not exclusively relates to communication between devices through a communication network.

BACKGROUND INFORMATION

Highly scalable, high-bandwidth applications such as voice over IP (VoIP) systems frequently utilize Internet Protocol (IP) multicast technologies to efficiently distribute audio communications amongst large numbers of users. While IP multicast makes extremely efficient use of available network bandwidth, configuration of the IP multicast infrastructure can be an administratively intensive task requiring the cooperation and coordination of numerous stakeholders and their organizations. As the distribution of IP multicast data becomes even more widespread within an organization and between organizations, the administrative task increases exponentially, resulting in increased costs and time being incurred to set up and maintain the network infrastructure.

The issue of network infrastructure maintenance becomes even more complex and time-consuming when the distribution of IP multicast data is required over Wide Area Networks (WANs)—as opposed to the (relatively) simple task of distributing such IP multicast traffic over Local Area Networks (LANs).

BRIEF SUMMARY

One aspect provides a method for communicating in a communication network environment, the environment including at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable. The method includes:

  • electing a device in the first LAN as a first distributed convergence node;
  • designating a device in the second LAN as a second distributed convergence node, the second distributed convergence node being a routing distributed convergence node;
  • electing a device in the third LAN as a third distributed convergence node; and
  • communicating traffic between the first and third distributed convergence nodes via the routing distributed convergence node, wherein the traffic can be communicated between devices within each of the LANs using IP multicast communication, and wherein the traffic can be communicated between the first distributed convergence node and the routing distributed convergence node and between the routing distributed convergence node and the second distributed convergence node using unicast communication.

Another aspect provides a system for communicating in a communication network environment, the environment including at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable. The system includes:

  • first distributed convergence node means in the first LAN for communicating traffic with devices in the first LAN using IP multicast communication and for communicating traffic from the devices over the WAN via unicast communication;
  • second distributed convergence node means in the second LAN for receiving the traffic communicated via unicast communication over the WAN from the first distributed convergence node means, the second distributed convergence node means being a routing distributed convergence node means for forwarding the traffic over the WAN using unicast communication;
  • third distributed convergence node means in the third LAN for receiving the traffic communicated by the routing distributed convergence node over the WAN using unicast communication, the third distributed convergence node means further being for distributing the received traffic to devices in the third LAN via IP multicast communication and for communicating traffic from the devices over the WAN to the routing distributed convergence node via unicast communication; and
  • electing means for electing a device as first, second, and third distributed convergence nodes, the distributed convergence nodes being dynamically changeable as a result of the electing.

Still another aspect provides an apparatus adapted to be used in a communication network environment, the environment including at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable. The apparatus includes:

  • a device having a distributed convergence node module, the distributed convergence node module including:
  • an elector module to elect the device as a first distributed convergence node in the first LAN;
  • an identifier module to identify the device from other devices in the first LAN, including identification of the device as the elected first distributed convergence node; and
  • a network interface in cooperation with a processor to communicate with the other devices in the first LAN using IP multicast communication and to communicate with a routing distributed convergence node in the second LAN via the WAN using unicast communication if the device is elected as the first distributed convergence node, so as to allow the routing distributed convergence node to forward communication via the WAN between the first distributed convergence node in the first LAN and a third distributed convergence node in the third LAN.

Yet another aspect provides an apparatus adapted to be used in a communication network environment, the environment including at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable. The apparatus includes:

  • a device having a routing distributed convergence node module, the routing distributed convergence node module including:
  • an elector module to elect the device as a routing distributed convergence node in the second LAN;
  • an identifier module to identify the device from other devices in the second LAN, including identification of the device as the elected routing distributed convergence node; and
  • a network interface in cooperation with a processor to communicate with the other devices in the second LAN using IP multicast communication and to communicate with a first distributed convergence node in the first LAN via the WAN using unicast communication and to communicate with a third distributed convergence node in the third LAN via the WAN using unicast communication, so as to forward traffic between the first and third distributed convergence nodes over the WAN.

Still another aspect provides an article of manufacture adapted to be used in a communication network environment, the environment including at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable. The article of manufacture includes:

  • a computer-readable medium adapted to be installed in one of the devices and having computer-readable instructions stored thereon that are executable by a processor to:
  • elect the device as a distributed convergence node;
  • identify the device from other devices in a same LAN as the device, including identification of the device as the elected distributed convergence node; and
  • communicate with the other devices in the same LAN using IP multicast communication and communicate with another distributed convergence node via the WAN using unicast communication, so as to enable transparent communication between distributed convergence nodes of different LANs via the WAN using unicast communication in a manner that makes the WAN appear to be IP-multicast-capable.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

Non-limiting and non-exhaustive embodiments are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified or the context indicates otherwise.

FIG. 1 is a logical system block diagram according to one embodiment. The diagram shows the manner in which a variety of endpoints (200-A through 200-C, 200-D through 200-F, 200-G through 200-I, and 200-J through 200-L) are each capable of communicating over IP multicast within their own IP multicast islands (110-A, 110-B, 110-C, and 110-D respectively), but are not able to communicate via IP multicast over the unicast network 120. In this embodiment, endpoints 200-F, 200-G, and 200-K establish unicast connections (300-A, 300-B, and 300-C respectively) across the unicast network 120 to a routing node 200-A.

FIG. 2 is a logical system block diagram in accordance with one embodiment. The diagram shows the manner in which, within an endpoint device 200, an application module 210 logically couples to an instance 400 of an embodiment. The instance 400, in turn, is coupled to the local network using IP multicast 100 as well as the unicast network 120.

FIG. 3 is a logical system block diagram depicting example components according to one embodiment 400.

FIG. 4 is a logical state transition diagram according to one embodiment. The diagram depicts the state transition model followed by an elector module 420 in determining whether a node is to transition between active and inactive states.

FIG. 5 is a logical flow chart diagram according to one embodiment. The diagram depicts the procedure followed within an identifier module 430 to determine site identification amongst nodes on an IP multicast network.

FIG. 6 is a logical flow chart diagram according to one embodiment. The diagram describes the procedure followed within a processor module 440 to forward data traffic received into the module 400 either through the capture of data traffic to or from the local IP multicast 100 or of data traffic received over the unicast WAN connection 120.

FIG. 7 is a logical transaction diagram according to one embodiment. The diagram shows the interaction between a Supernode 400-A, a Routing Supernode 400-B, and a third Supernode 400-C. The various stages of interaction include a session setup stage 900-A, a registration stage 900-B, a data streaming stage 900-C, an unregistration stage 900-D, and a session teardown stage 900-E. To reduce complexity of the diagram, only stage 900-C is depicted as including Supernode 400-C. It should be understood that the same interaction present between nodes 400-A and 400-B is present between nodes 400-C and 400-B, as well as between other Supernodes that may be present.

DETAILED DESCRIPTION

In the following description, numerous specific details are given to provide a thorough understanding of embodiments. The embodiments can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the embodiments.

Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as, “comprises” and “comprising” are to be construed in an open, inclusive sense, that is as “including, but not limited to.”

The headings provided herein are for convenience only and do not interpret the scope or meaning of the embodiments.

One solution to the problems described above is an embodiment wherein the very applications used by users on their computers (or other types of devices) for communications with other such devices effectively become part of the network routing infrastructure, coordinating with other instances of the applications to efficiently distribute IP multicast data, especially but not exclusively over the WAN or other network where the most administrative complexity is incurred on an on-going basis. Such technology (including related functionality) is referred to at times herein as “Supernode(s)”, equivalently as one or more “distributed convergence nodes”, and more generally as at least one convergence node (including a plurality of convergence nodes or individual distributed convergence nodes).

At least some portions of the various embodiments of the Supernode technology can be implemented in conjunction with the various systems, apparatus, articles of manufacture, and/or methods disclosed in U.S. patent application Ser. No. 10/977,115, entitled “WIDE AREA VOICE ENVIRONMENT MULTI-CHANNEL COMMUNICATIONS SYSTEM AND METHOD,” filed Oct. 29, 2004, assigned to the same assignee (Twisted Pair Solutions, Inc.) as the present application, and incorporated herein by reference in its entirety.

According to one embodiment, a Supernode includes a functional technology component within an application that automatically determines whether it (the component) should become “active” and assume the responsibility of forwarding IP multicast data present on the LAN (or other network) across the WAN (or other network) to a “Routing Supernode” which, in turn, is responsible for forwarding that traffic to other Supernodes present on other LANs (or other networks). All Supernodes are responsible not only for forwarding traffic present on the LAN (or other network) across the WAN (or other network) to a Routing Supernode, but also for forwarding traffic received over the WAN (or other network) from the Routing Supernode onto their own LANs (or other networks), thereby creating the appearance of a “flat” IP multicast network to the hosting application along with other multicast applications on the LAN (or other network). In effect, a device at location A (e.g., New York) can transparently communicate with another device at location B (e.g., Los Angeles) across what each believes to be a fully IP multicast enabled network. A feature, though, is that the Supernodes at each location, along with one or more Routing Supernodes, are in actuality creating the appearance of a virtual “flat” IP multicast network even though IP multicast is truly present only at each location (the individual LANs or other networks) and not between the locations (across the WAN or other network). Such a location, where IP multicast is available to applications but is bordered at some physical or logical boundary beyond which IP multicast does not flow, is referred to herein as a “multicast island”.

A feature with Supernodes according to an embodiment is that they are part of the applications themselves (and not separate solutions), and that the Supernode components present within each application communicate with each other in near real-time over the IP network to determine which component housed on which device on the network will become the forwarding entity.

Implementation in the form of an end-user application on a client device is not the only implementation of an embodiment. Another embodiment is used on non-user computing devices such as server computers and specialized appliances. In either case (end-user or otherwise), the same functionality afforded by the embodiment(s) in one implementation may be afforded in the other.

The functionality of the embodiment(s) is to create a virtualized IP multicast network comprising two or more IP-multicast-enabled networks separated by one or more non-IP-multicast-capable networks. As such, an embodiment is responsible for inserting itself into an application or device designed for IP multicast such that data received from and transmitted to the IP multicast network by the application or device is relayed by unicast connection across the intervening non-IP-multicast-enabled networks. The result of this operation is that applications or devices across the entire network, including those on different sides of non-IP-multicast-enabled networks, are capable of communicating with each other using IP multicast even though IP multicast is not available end-to-end across the entire network.

For the sake of simplicity of explanation hereinafter, the various networks in which the embodiments are implemented will be described in terms of LANs and WANs. Embodiments may be implemented in other types of networks, which may be variations and/or combinations of WANs and LANs, or completely different from WANs and LANs.

As depicted in FIG. 1, in an embodiment, routing node 200-A functions to route traffic received across unicast connections 300-A, 300-B, and 300-C to all other unicast connections (and thus operates as a routing Supernode or as a routing distributed convergence node), as well as functioning to forward such traffic to its own local IP multicast network 100-A. Nodes receiving traffic over unicast connection from routing node 200-A follow similar operation—forwarding such traffic to their own respective IP multicast networks. For example: data received by routing node 200-A from endpoint 200-F over unicast connection 300-A is routed by routing node 200-A to endpoints 200-G and 200-K over their respective unicast connections 300-B and 300-C. In addition, routing node 200-A functions to forward traffic received over unicast connections to the local IP multicast network 100-A thereby making such traffic available to endpoints 200-B and 200-C. Similarly, endpoints receiving unicast traffic across the Wide Area Network 120 function to forward such traffic to their own local IP multicast network, making such traffic available to endpoints local to their respective networks. For example: traffic received from routing node 200-A by endpoint 200-K over unicast connection 300-C is forwarded by endpoint 200-K to the local IP multicast network 100-D making such traffic available as multicast traffic to endpoints 200-J and 200-L. In addition, nodes 200-A, 200-F, 200-G, and 200-K also serve to forward traffic received over the unicast WAN 120 to the application they are hosted within or coupled to, so as to create the same appearance of virtualized IP multicast for the hosting/coupled application as is created for other nodes on each node's respective local IP multicast network.
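By way of illustration only, the fan-out behavior of a routing node such as 200-A described above might be sketched as follows. The class and method names here are hypothetical, introduced solely for this sketch, and are not part of the disclosure:

```python
# Illustrative sketch: a routing Supernode relays traffic arriving on one
# unicast connection to every OTHER unicast connection, and also onto its
# own local IP multicast network (e.g., 100-A). All names are hypothetical.

class RoutingSupernode:
    def __init__(self, local_multicast_send):
        # connection id (e.g., "300-A") -> callable that sends on that connection
        self.unicast_connections = {}
        self.local_multicast_send = local_multicast_send

    def add_connection(self, conn_id, send):
        self.unicast_connections[conn_id] = send

    def on_unicast_receive(self, source_id, packet):
        # Relay to all other unicast connections (e.g., 300-B and 300-C
        # when the packet arrived on 300-A)...
        for conn_id, send in self.unicast_connections.items():
            if conn_id != source_id:
                send(packet)
        # ...and make the traffic available on the local multicast island.
        self.local_multicast_send(packet)
```

For example, a packet arriving from endpoint 200-F over connection 300-A would be relayed on 300-B and 300-C and also sent onto multicast network 100-A, matching the routing behavior described above.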

In one embodiment, each of said endpoints is associated with a network address, such as an IP address. The network address of any particular endpoint designated/elected as a Supernode or as a Routing Supernode can be made known to all other Supernodes. The address can be made known, for example, by statically programming or otherwise providing each Supernode with the IP address of a Routing Supernode. Alternatively or additionally, the IP address of the Routing Supernode can be made known to other Supernodes in a dynamic manner, such as by broadcasting the address or otherwise communicating the address to the various Supernodes.

According to various embodiments, the nodes 200 may be implemented on or as a device such as a client device and/or on non-user computing devices such as server computers and specialized appliances. Examples of client devices include, but are not limited to, personal computers (PCs), laptops, wireless devices (such as cellular telephones, PDAs, and so forth), set top boxes, and/or any other portable or stationary electronic communication device that can have network connectivity. Examples of non-user devices can include servers (as mentioned above), routers, switches, and other wireless and/or hardwired devices that can have network connectivity.

Node Election

An embodiment of a Supernode module 400 as depicted in FIG. 2 and FIG. 3, upon learning of the unique IP address/port pairs of the IP multicast data streams that an application 210 is currently processing, creates a state machine within itself in elector module 420 to represent that particular address/port pair. Such learning may occur in a multitude of ways including, but not limited to, static configuration, via an application programming interface 410 provided to the application by an embodiment, and through insertion in the pathway between the application and the IP multicast network.

In a similar embodiment, the elector module 420 is responsible for determining whether the current instance of the embodiment will be responsible for processing IP multicast traffic across a unicast network, or whether another instance on the same IP multicast network will be the responsible proxy entity. Such determination of a state of being active or inactive is made through a state machine diagrammed in FIG. 4 wherein an election token, once generated by each instance of the elector on the network, is utilized in conjunction with the state machine's operation. The token may take the form of a simple random number or may be calculated with varying degrees of sophistication based on factors including, but not limited to, the device's current resource utilization (e.g., memory, CPU, network bandwidth, and disk space). The election token may also include a variety of other components such as instance rank level, values indicating a device's desire (or lack thereof) to become active, etc. Note that the list presented is not exhaustive of the various ways in which an election token may be determined.

In an embodiment of the elector module 420 described in FIG. 4, the state machine within the elector module 420 transitions between different states. The elector listens on the local IP multicast for election tokens from other instances of the elector, or from similarly implemented or compatible embodiments, on that IP multicast network. Upon receipt of varying message types and/or expiration of a timer within the elector module, the state machine determines, based on comparison of the numerical values of the election token received from peers (denoted as “Peer Token” in FIG. 4) and its own token (denoted as “My Token” in FIG. 4), whether the current instance of the embodiment shall transition to the active or inactive state. In one example embodiment, a particular elector module 420 “wins” the election if its token (in the form of a random number) has the least/smallest value as compared to the random number values of its peers' tokens. Of course, this is only one example implementation for determining a winner; other embodiments may use other types of criteria for determining the winner of the election.
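The token comparison in the example above can be sketched minimally as follows. The function names and the bias mechanism are assumptions for illustration only; they are not taken from the disclosure:

```python
import random

# Illustrative sketch of the "smallest token wins" election comparison.
# A lower token value is "better"; a device eager to become active could
# subtract a bias, while a heavily loaded device could add one.

def make_token(rank_bias=0):
    return random.randint(0, 2**31) + rank_bias

def wins_election(my_token, peer_tokens):
    # The local instance transitions to active only if its token is
    # strictly smaller than every peer token it has heard.
    return all(my_token < peer for peer in peer_tokens)
```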

In the event an instance transitions to an active state, that instance begins transmitting its own token onto the local IP multicast network such that other instances of the elector on such local IP multicast may process the token according to similar or compatible embodiments of the state machine logic.

Upon determination that the current entity is to be the active entity, the processor module 440 is notified of such determination—constituting an activation of forwarding of IP multicast data traffic across a unicast network connection. At the same time, the elector transitions to a state of sending out its own beacon until such time that another elector on the network takes over control as described above.

Once forwarding is activated, the processor module 440 captures IP multicast traffic received and transmitted by the application 210, forwarding such traffic across the unicast network connection 120 via the network interface module 450. Such forwarding takes place in one embodiment only if the far-end has registered a desire to receive such traffic. Such determination is made by the far-end and communicated to the processor module 440 on the local active entity via the unicast connection 120.

Processor

The operation of the processor 440, as depicted in FIG. 6, depends in one embodiment on the source of the data traffic entering the processor module. If the traffic was received over a unicast connection, that data traffic is passed on to the hosting/coupled application, giving the application the appearance that the data traffic was received on its own local IP multicast interface. Such passing of data traffic from the embodiment to the hosting/coupled application may take a number of forms including, but not limited to, notification from the embodiment to the application through an application programming interface or insertion into the data traffic flow between the application and the network interface.

If the traffic was received from the application, whether through notification from the application to the instance of the embodiment through an application programming interface, through insertion into the flow of network data traffic between the application and the network interface, or through other viable means of interception or data traffic capture, that data is encrypted, encapsulated, and distributed via the processor module 440 to all far-end entities that are coupled to the instance of the embodiment and have registered a desire to receive such traffic.
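The two-way decision described above can be sketched as follows. The function and attribute names are hypothetical placeholders, not identifiers from the disclosure:

```python
# Illustrative sketch of the processor decision of FIG. 6: unicast-sourced
# traffic is handed to the hosting application as if it were local multicast;
# application-sourced traffic is encrypted and sent to every far-end that has
# registered for it. All names here are assumptions for illustration.

def process_traffic(source, packet, app_deliver, far_ends, encrypt):
    if source == "unicast":
        # Appears to the application as local IP multicast traffic.
        app_deliver(packet)
    elif source == "application":
        payload = encrypt(packet)   # encrypt-then-encapsulate before the WAN
        for far_end in far_ends:
            if far_end.registered:  # only far-ends that asked for this stream
                far_end.send(payload)
```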

In an embodiment, the processor module 440 makes use of well-known encryption logic such as the Advanced Encryption Standard (AES) or other suitable encryption technique to encrypt data to be transmitted across the unicast network 120. The receiving end of the unicast connection 120, upon receiving data traffic from a unicast transmitter, proceeds to decrypt that data traffic using the decryption logic employed by the encryption module on the transmitting end.

Additionally, in an embodiment, the processor module 440 may optionally make use of data filtering and conversion functionality to facilitate enhanced transmission of forwarded data across the unicast connection 120. Such data filtering may include, but is not limited to, media re-packetization and transcoding, that is, the conversion of such media between different data packet sizes and/or encoding types for purposes of bandwidth reduction, media enhancement, and other media-modification features desired by a user. Data filtering may also include specialized data caching to further reduce the transport of redundant data across the unicast link.

Site Identification

Turning to FIG. 5, wherein determination of a site identifier is depicted, an instance of an embodiment determines, at initiation of operation, the unique identifier for the location or “site” where the instance is operating. Such an identifier is useful to the efficient operation of the embodiment as it is used to communicate to nodes at other sites the fact that a particular site, including the individual application entities at that site, is no longer present on the network. (The term “network” here being understood for one embodiment to be the entire network and not just the individual site network or component of the entire network.) Such tracking of the presence of individual devices at remote locations allows devices at other locations to quickly and efficiently add or remove presence information of said devices in the event of network outages and other unforeseen events.

In an embodiment, determination of the site identifier is accomplished by the flow chart depicted in FIG. 5. At initiation of activity, a local instance of the identifier module 430 begins by listening on the local IP multicast network for a message from another similar or compatible entity transmitting a site identifier. If such a message is received, the local instance stores this identifier and proceeds to use it in its operation as described below.

If no such identifier is received within a reasonable time, the local instance determines whether it had previously received and stored an identifier. If this is not the case, the local instance proceeds to generate and store its own unique identifier according to a unique identifier generation scheme such as the algorithm utilized to calculate a Globally Unique Identifier (GUID).

Subsequently, the local instance begins on-going transmission of the site identifier, whether previously received and stored or previously generated and stored. Once this process begins, the instance continues it until such time as the instance of the embodiment becomes inactive or is shut down.
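The site-identification decision chain of FIG. 5 can be condensed into the following sketch. The function name and argument names are assumptions for illustration:

```python
import uuid

# Illustrative sketch of the FIG. 5 flow: adopt an identifier heard on the
# local multicast network; otherwise reuse a previously stored identifier;
# otherwise generate a new globally unique identifier (GUID) for this site.

def determine_site_id(heard_id=None, stored_id=None):
    if heard_id is not None:       # another node already advertises a site id
        return heard_id
    if stored_id is not None:      # fall back to the previously stored id
        return stored_id
    return str(uuid.uuid4())       # no id heard or stored: generate a GUID
```

Whichever branch is taken, the resulting identifier is then transmitted on an on-going basis as described above.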

Session Setup

In an embodiment, establishment and maintenance of a “session” between two unicast entities is contemplated. Such a session is deemed to be established and maintained for the duration of the existence of a need for the two entities on either end of a unicast network to be coupled.

In an embodiment, the establishment of a session is implemented via a reliable unicast connection such as TCP between two entities, for example nodes 200-F and 200-A from FIG. 1, depicted as 400-A and 400-B in FIG. 7. Such establishment, as shown in FIG. 7 item 900-A, comprises a multi-stage interaction wherein the connecting entity 400-A initiates a connection through a session setup request to entity 400-B. Such a request, upon being received by entity 400-B, causes entity 400-B to generate a session encryption key to be used for encryption purposes in all subsequent interactions. The generation of the session encryption key may be accomplished by a number of methods including, but not limited to, public/private key generation as part of an asymmetric cryptography technique such as Diffie-Hellman, DSS, or RSA. This key is then communicated back to entity 400-A from entity 400-B using the unicast connection established during entity 400-A's session setup request.

The next step during stage 900-A is for entity 400-A to encrypt (using the encryption key generated and agreed upon during the step described above and an agreed-upon or previously configured algorithm such as AES) access, authentication, and authorization (AAA) information including, but not limited to, a system identifier, a unique location or “site” identifier, client authorization, and other identifying characteristics required for the establishment of a session between entities 400-A and 400-B. Such encrypted information is transmitted to entity 400-B by entity 400-A over the unicast connection.

Upon receipt of the aforementioned AAA information, entity 400-B proceeds to grant or deny access to entity 400-A, resulting in the final step of stage 900-A, an acknowledgement of the session establishment. Such processing of AAA information may include, but is not limited to, self-processing by entity 400-B or entity 400-B interfacing with an external entity such as RADIUS or Active Directory for full or partial processing of the AAA information.
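One of the key-agreement options mentioned above, Diffie-Hellman, can be illustrated with the following toy sketch. The parameters are deliberately tiny and illustrative only; a real deployment would use a standardized group and key size, and the variable names are assumptions for this sketch:

```python
import secrets

# Toy Diffie-Hellman illustration: each side generates a keypair, the public
# halves are exchanged over the session-setup connection, and both sides
# derive the same shared secret, usable to key a cipher such as AES.
# P and G are illustrative toy parameters, NOT a safe production group.

P = 0xFFFFFFFB   # 4294967291, a small prime modulus
G = 5            # generator

def dh_keypair():
    private = secrets.randbelow(P - 2) + 1   # private exponent in [1, P-2]
    public = pow(G, private, P)
    return private, public

b_priv, b_pub = dh_keypair()   # generated by entity 400-B
a_priv, a_pub = dh_keypair()   # generated by entity 400-A

shared_a = pow(b_pub, a_priv, P)   # computed by 400-A from 400-B's public
shared_b = pow(a_pub, b_priv, P)   # computed by 400-B from 400-A's public
```

Both computations yield the same value because (G^b)^a mod P equals (G^a)^b mod P, which is the property the key agreement relies on.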

Session Streaming

Upon establishment of the session, an embodiment causes an iterative interaction 900-B, 900-C, and 900-D to ensue over the course of time, during which entity 400-A registers its intent to forward and receive data traffic for the unique address/port pairs the hosting/coupled application is processing. Such intent is based on activation of the processor module for unique address/port pairs as determined by the elector module 420 and described above. During a registration, entity 400-A signals its intent to forward and receive traffic for unique stream identifiers, including but not limited to address/port pairs, by transmitting details of said stream identifiers to the routing entity 400-B. This action causes entity 400-B to establish forwarding and routing tables within itself in processor module 440 such that traffic received into entity 400-B is forwarded over unicast to other coupled entities that have similarly registered such stream identifiers. In response to the registration notification as described above, entity 400-B transmits back to entity 400-A acceptance of the registration notification. This action causes entity 400-A to begin forwarding of self-generated and local IP multicast traffic, as described above, to entity 400-B for distribution according to the logic flow chart depicting such in FIG. 6. This action also causes entity 400-B to include entity 400-A in the distribution of locally received IP multicast data as well as data received over unicast from other coupled entities (such as 400-C in FIG. 7).
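The registration and forwarding-table behavior of the routing entity can be sketched as follows. The names (`RoutingSupernode`, `register`, `forward`) are illustrative assumptions, not terms from the specification; the table maps each stream identifier (here an address/port pair) to the set of registered entities, and traffic is forwarded to every registered entity other than its source.

```python
class RoutingSupernode:
    """Hypothetical sketch of entity 400-B's routing tables."""
    def __init__(self):
        self.table = {}  # stream identifier (addr, port) -> set of node ids

    def register(self, node_id, stream_id):
        # Entity registers intent to forward/receive traffic for a stream.
        self.table.setdefault(stream_id, set()).add(node_id)
        return True  # acceptance transmitted back to the registering entity

    def unregister(self, node_id, stream_id):
        # Remove forwarding details for the unregistering entity.
        self.table.get(stream_id, set()).discard(node_id)

    def forward(self, source_id, stream_id, payload, send):
        # Forward to every other entity registered for this stream identifier.
        for dest in self.table.get(stream_id, set()) - {source_id}:
            send(dest, stream_id, payload)

rsn = RoutingSupernode()
rsn.register("400-A", ("239.1.1.1", 5004))
rsn.register("400-C", ("239.1.1.1", 5004))
delivered = []
rsn.forward("400-A", ("239.1.1.1", 5004), b"audio",
            lambda dest, sid, payload: delivered.append(dest))
```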

Streaming of data over the unicast connection is then maintained for the duration of the registration. In an embodiment, such streaming may occur over a reliable unicast connection (such as TCP), over a “best-effort” connection utilizing a protocol such as UDP, or a combination thereof, according to desired preconfigured or dynamically determined performance requirements. In such an embodiment, where a best-effort unicast connection is utilized for data streaming, entities participating in the unicast data stream connection may actively communicate, from receiving entity to transmitting entity, information such as packet loss statistics and recommendations for packet loss concealment techniques to be employed. Such packet loss concealment techniques include, but are not limited to, oversending of packets by the transmitting entity, inclusion of components of previously transmitted packets within a packet, sequence numbers to track lost packets for purposes of requesting resends of individual lost packets, and so forth. Note that numerous varieties and embodiments of packet loss concealment techniques exist, and the aforementioned list does not constitute an exhaustive list of such techniques that may be employed by an embodiment.
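One of the concealment aids listed above, sequence numbers used by the receiving entity to track lost packets and request resends, can be sketched as follows. The class and method names are hypothetical illustrations, not part of the specification.

```python
class LossTracker:
    """Hypothetical receiver-side tracking of lost sequence numbers."""
    def __init__(self):
        self.expected = 0  # next sequence number we expect to see
        self.lost = []     # sequence numbers observed as gaps

    def on_packet(self, seq):
        # A gap between the expected and received sequence numbers
        # marks every number in between as lost.
        if seq > self.expected:
            self.lost.extend(range(self.expected, seq))
        self.expected = max(self.expected, seq + 1)

    def resend_requests(self):
        # Hand back the lost sequence numbers to be requested from the
        # transmitting entity, clearing the pending list.
        requests, self.lost = self.lost, []
        return requests

tracker = LossTracker()
for seq in (0, 1, 4, 5):  # packets 2 and 3 never arrive
    tracker.on_packet(seq)
```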

In an embodiment, data streamed over the unicast connection (reliable, best-effort, or otherwise) is encrypted utilizing the previously described encryption keys and associated algorithms. Such encrypted data constitutes the payload being transmitted over the unicast connection and is preceded by encapsulation information such that the receiving entity may properly process the streamed data. Such encapsulation information includes, but is not limited to, the unique identifier of the transmitting entity and the source stream identifier from which the payload was obtained (and therefore the destination to which the transmitting unicast endpoint wishes the data to be forwarded). Such streaming interaction continues for the duration of the registration.
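The encapsulation described above can be sketched as a fixed header carrying the transmitting entity's identifier and the source stream identifier, followed by the encrypted payload. The header layout below (16-byte sender identifier, 4-byte IPv4 stream address, 2-byte port) is an assumed illustration; the specification does not prescribe a wire format.

```python
import struct

# Assumed layout: sender id (16 bytes), IPv4 stream address (4), port (2).
HEADER = struct.Struct("!16s4sH")

def encapsulate(sender_id: bytes, addr: bytes, port: int, payload: bytes) -> bytes:
    # Prepend the encapsulation header to the (already encrypted) payload.
    return HEADER.pack(sender_id.ljust(16, b"\0"), addr, port) + payload

def decapsulate(frame: bytes):
    # Recover sender, stream identifier, and payload at the receiving entity.
    sender, addr, port = HEADER.unpack(frame[:HEADER.size])
    return sender.rstrip(b"\0"), addr, port, frame[HEADER.size:]

frame = encapsulate(b"node-A", bytes([239, 1, 1, 1]), 5004, b"encrypted-payload")
sender, addr, port, payload = decapsulate(frame)
```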

In the event a node such as 400-A becomes inactive for a particular stream identifier in accordance with the election logic detailed earlier, node 400-A proceeds to notify entity 400-B of its intent to stop further processing of data for that particular stream identifier. Such notification is similar in nature to the registration process described earlier, the difference being that an unregistration operation is performed rather than a registration. In response, entity 400-B proceeds to remove from its routing table the forwarding details for the unique stream identifier for the unregistering entity and ceases to process data traffic received over the unicast connection from that entity.

Part of the process of streaming data is the detection of duplicated data. Such an event may occur for a variety of reasons including, but not limited to, transmission latency over the various intervening networks, erroneous implementations within embodiments, invalid configurations by maintenance personnel or systems, and so forth. Such occurrences may result in temporary or sustained duplication of data traffic received from simultaneously active nodes within a particular IP multicast network.

Duplicate Detection

In an embodiment, duplicate detection falls within the purview of the processor module 440, which examines each packet and keeps track of a list of previously processed packets. For each packet a unique signature is calculated according to the MD5 algorithm. This signature is stored in a list, and the signature of each packet entering the processor module 440 is compared against this list. In the event a duplicate signature is found and certain thresholds are met, the packet is rejected, preventing further forwarding of the duplicate packet and thereby avoiding a packet loop. The parameters defined for the length of the list and the relevant thresholds beyond which packets are not forwarded may be actively determined by the embodiment and/or defined by personnel or other devices acting in a maintenance role. It is noted that the algorithm used for packet signature determination may include MD5 but is not limited to such algorithm; any viable and applicable algorithm may be utilized for this purpose in various embodiments.
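The signature-list approach to duplicate detection can be sketched as follows, a minimal illustration assuming MD5 signatures and a bounded history list (class and parameter names are hypothetical; real embodiments would also apply the configurable thresholds discussed above).

```python
import hashlib
from collections import deque

class DuplicateFilter:
    """Hypothetical sketch of signature-based duplicate rejection."""
    def __init__(self, history=1024):
        # Bounded list of recent signatures plus a set for O(1) lookup.
        self.recent = deque(maxlen=history)
        self.seen = set()

    def accept(self, packet: bytes) -> bool:
        sig = hashlib.md5(packet).digest()
        if sig in self.seen:
            return False  # duplicate: reject, do not forward onward
        if len(self.recent) == self.recent.maxlen:
            # Evict the oldest signature before the deque drops it.
            self.seen.discard(self.recent[0])
        self.recent.append(sig)
        self.seen.add(sig)
        return True  # first sighting: forward as usual

dup = DuplicateFilter(history=4)
first = dup.accept(b"packet-1")
second = dup.accept(b"packet-1")
```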

In summary of this description, interactions 900-B, 900-C, and 900-D continue iteratively over the course of the session being established.

Session Teardown

In the event an entity of an embodiment becomes wholly inactive, or in the situation where no stream identifiers are being processed by said entity, the session previously established during step 900-A is destroyed. This interaction takes the form of step 900-E, wherein a session teardown message is transmitted by entity 400-A to entity 400-B. The action taken by entity 400-B in response to such message is to remove all entries for entity 400-A from its internal routing tables and to cease forwarding traffic to or processing traffic from entity 400-A. In an embodiment, such “ordered” teardown is not strictly required, as a simple disconnection of the unicast connection between the entities is sufficient to constitute an automatic teardown within each entity.
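The routing entity's side of session teardown, removing every table entry for the departing entity, can be sketched as follows (names are illustrative assumptions; the same routine would serve for an ordered teardown message or a detected disconnection).

```python
class SessionTable:
    """Hypothetical routing-node bookkeeping cleared at session teardown."""
    def __init__(self):
        self.routes = {}  # stream identifier -> set of registered node ids

    def register(self, node_id, stream_id):
        self.routes.setdefault(stream_id, set()).add(node_id)

    def teardown(self, node_id):
        # Remove every entry for the departing entity; streams with no
        # remaining registrants are dropped entirely.
        for stream_id in list(self.routes):
            self.routes[stream_id].discard(node_id)
            if not self.routes[stream_id]:
                del self.routes[stream_id]

table = SessionTable()
table.register("400-A", ("239.1.1.1", 5004))
table.register("400-C", ("239.1.1.1", 5004))
table.register("400-A", ("239.1.1.2", 5006))
table.teardown("400-A")
```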

The various operations represented in the illustrated flowcharts and described above can be implemented in one embodiment by software or other computer-readable instructions encoded on or otherwise stored on a computer-readable medium (such as a memory in the form of ROM, RAM, or another type of hardware memory), and executable by one or more processors. For example, the processor and computer-readable medium storing the computer-readable instructions can be present in one or more of the devices described above, such as at the devices implementing the nodes 200-A, 200-F, etc. For example, in one embodiment, the processor 440 of the node 200 can execute the computer-readable instructions stored in a memory or other computer-readable storage medium at the node 200. In one embodiment, the various modules/components shown in FIGS. 2-3 can be implemented by software, hardware, and/or a combination of both. For instance, the application 210 and certain components of the module 400 (shown in FIG. 3) can be implemented as software stored on the computer-readable medium, and executable by the processor 440 (such as a processor implemented at least in part by hardware).

The various embodiments described above can be combined to provide further embodiments. All of the commonly assigned US patent application publications, US patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet, including but not limited to U.S. Provisional Patent Application Ser. No. 60/908,878, filed Mar. 29, 2007 and entitled “METHOD, APPARATUS, SYSTEM, AND ARTICLE OF MANUFACTURE FOR PROVIDING SUPERNODES IN A COMMUNICATION NETWORK ENVIRONMENT,” U.S. patent application Ser. No. 12/057,289, filed Mar. 27, 2008 and entitled “METHOD, APPARATUS, SYSTEM, AND ARTICLE OF MANUFACTURE FOR PROVIDING SUPERNODES IN A COMMUNICATION NETWORK ENVIRONMENT;” and U.S. patent application Ser. No. 12/724,244, filed Mar. 15, 2010 and entitled “METHOD, APPARATUS, SYSTEM, AND ARTICLE OF MANUFACTURE FOR PROVIDING SUPERNODES IN A COMMUNICATION NETWORK ENVIRONMENT” are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary, to employ systems, circuits and concepts of the various patents, applications and publications to provide yet further embodiments.

The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. While specific embodiments and examples are described herein for illustrative purposes, various equivalent modifications are possible and can be made.

For example, embodiments are not restricted to any particular data type, end device type, data format, communication format or protocol, manufacturer device model, network device type, specific sequence of operations (for example, some operations described herein may be performed sequentially and/or simultaneously), etc.

These and other modifications can be made to the embodiments in light of the above detailed description. The terms used in the following claims should not be construed to be limited to the specific embodiments disclosed in the specification and the claims. Instead, the terms used in the claims should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.

Claims

1. A method of operation in a communication network environment, the communication network environment including a number of networked client devices that execute end-user applications and a number of non-user network infrastructure devices, the networked client devices and the non-user network infrastructure devices communicatively coupled in at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable, the method comprising:

dynamically electing ones of the networked client devices in the first LAN as a first LAN distributed convergence node at respective times, at least the elected networked client device in the first LAN executing at least one end-user application;
designating a device in the second LAN as a second LAN distributed convergence node, the second LAN distributed convergence node being a routing distributed convergence node;
dynamically electing ones of the networked client devices in the third LAN as a third LAN distributed convergence node at respective times, at least the elected networked client device in the third LAN executing at least one end-user application; and
communicating traffic between the first and the third LAN distributed convergence nodes via the routing distributed convergence node, wherein the traffic is communicated between the networked client devices within each of the first, the second and the third LANs via the end-user applications executing on the elected networked client devices using IP multicast communication, and the traffic is communicated between the first distributed convergence node and the routing distributed convergence node and between the routing distributed convergence node and the third distributed convergence node using unicast communication.

2. The method of claim 1 wherein designating the device in the second LAN as the second LAN distributed convergence node includes dynamically electing one of the networked client devices of the second LAN to communicate traffic between the first and the third LAN distributed convergence nodes.

3. The method of claim 1 wherein designating the device in the second LAN as the second LAN distributed convergence node includes causing a component of an end-user application executing on one of the networked client devices of the second LAN to communicate traffic between the first and the third LAN distributed convergence nodes.

4. The method of claim 2 wherein dynamically electing ones of the networked client devices in any of the first or third LANs includes electing the respective networked client device based at least in part on information from a number of peer networked client devices in the respective LAN to which the elected networked client device belongs.

5. The method of claim 4 wherein electing the respective networked client device based at least in part on information from a number of peer networked client devices in the respective LAN to which the elected networked client device belongs includes passing at least one token between at least some of the networked client devices of the respective LAN.

6. The method of claim 1 wherein communicating traffic between the first and the third LAN distributed convergence nodes via the routing distributed convergence node includes transparently communicating across the WAN via the routing distributed convergence node, such that each of the first and the third distributed convergence nodes communicate with each other via the respective end-user applications executing on the elected ones of the networked client devices.

7. The method of claim 1, further comprising at least one or more of:

determining a site identifier associated with each distributed convergence node;
performing dynamic oversending;
applying security to traffic sent to and from the routing distributed convergence node; or
performing encapsulation of data contained in the traffic sent to and from the routing distributed convergence node.

8. The method of claim 1 wherein each distributed convergence node, in response to receiving first traffic from any other networked client device in the respective LAN via IP multicast communication, forwards the first traffic to the routing distributed convergence node via unicast communication, and further receives second traffic from the routing distributed convergence node via unicast communication and distributes the second traffic to other networked client devices of the respective LAN via IP multicast communication.

9. The method of claim 1 wherein dynamically electing includes using a state machine process to activate a state of a respective end user application executing on the networked client device to indicate election thereof, a deactivated state of the networked client device indicating non-election thereof.

10. The method of claim 1 wherein the communicating includes:

performing a session setup stage between the end user application executing at the first LAN distributed convergence node and the routing distributed convergence node;
performing a registration stage between the end user application executing at the first LAN distributed convergence node and the routing distributed convergence node;
conducting a data streaming stage between the end user application executing at the first LAN distributed convergence node and the routing distributed convergence node and between the routing distributed convergence node and the end user application executing at the third LAN distributed convergence node;
performing an unregistration stage between the end user application executing at the first LAN distributed convergence node and the routing distributed convergence node; and
performing a session teardown stage between the end user application executing at the first LAN distributed convergence node and the routing distributed convergence node.

11. The method of claim 1 wherein communicating traffic between the first and the third LAN distributed convergence nodes via the routing distributed convergence node includes communicating traffic via non-user network infrastructure devices in the form of at least one of a router or a server, in addition to the elected networked client devices.

12. The method of claim 1 wherein dynamically electing ones of the networked client devices in the first LAN as a first LAN distributed convergence node at respective times includes electing a first one of the networked client devices in the first LAN as the first LAN distributed convergence node at a first time, and electing a second one of the networked client devices in the first LAN as the first LAN distributed convergence node at a second time.

13. A system for communicating in a communication network environment, the communication network environment including a number of networked client devices that execute respective instances of end-user applications and a number of non-user network infrastructure devices, the networked client devices and the non-user network infrastructure devices communicatively coupled in at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable, the system comprising:

first distributed convergence node means in the first LAN for communicating traffic with networked client devices of the first LAN using IP multicast communication and for communicating traffic from the devices over the WAN via unicast communication;
second distributed convergence node means in the second LAN for receiving the traffic communicated via unicast communication over the WAN from the first distributed convergence node means, the second distributed convergence node means being a routing distributed convergence node means for forwarding the traffic over the WAN using unicast communication;
third distributed convergence node means in the third LAN for receiving the traffic communicated by the routing distributed convergence node means over the WAN using unicast communication, the third distributed convergence node means further being for distributing the received traffic to networked client devices of the third LAN via IP multicast communication and for communicating traffic from the devices over said WAN to the routing distributed convergence node means via unicast communication; and
electing means for electing respective ones of the networked client devices as the first, the second, and the third distributed convergence nodes, the distributed convergence nodes being dynamically changeable as a result of the electing.

14. The system of claim 13, further comprising at least one or more of:

means for determining a site identifier associated with each distributed convergence node;
means for performing dynamic oversending;
means for applying security to traffic sent to and from the routing distributed convergence node;
means for detecting duplicate packets in traffic; or
means for performing encapsulation of data contained in the traffic sent to and from the routing distributed convergence node.

15. The system of claim 13, further comprising:

means for performing a session setup stage between the first distributed convergence node and the routing distributed convergence node;
means for performing a registration stage between the first distributed convergence node and the routing distributed convergence node;
means for conducting a data streaming stage between the first distributed convergence node and the routing distributed convergence node and between the routing distributed convergence node and the third distributed convergence node;
means for performing an unregistration stage between the first distributed convergence node and the routing distributed convergence node; and
means for performing a session teardown stage between the first distributed convergence node and the routing distributed convergence node.

16. The system of claim 13 wherein the first, the second, and the third distributed convergence node means include a component of a respective instance of at least one end user application residing on a networked client device.

17. An apparatus adapted to be used in a communication network environment, the communication network environment including a number of networked client devices that execute respective instances of end-user applications and a number of non-user devices in the form of routers or servers, the networked client devices and non-user devices communicatively coupled in at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable, the apparatus comprising:

one of the networked client devices, having a distributed convergence node module at an application level of one of the end-user applications executing on the one of the networked client devices, the distributed convergence node module including: an elector module that elects the one of the networked client devices as a first distributed convergence node in the first LAN; an identifier module that identifies the one of the networked client devices from other networked client devices in the first LAN, including identification of the one of the networked client devices as the elected first distributed convergence node; and a network interface communicatively coupled with a processor to communicate with the other networked client devices in the first LAN using IP multicast communication and to communicate with a routing distributed convergence node in the second LAN via the WAN using unicast communication when the networked client device is elected as the first distributed convergence node.

18. The apparatus of claim 17 wherein the distributed convergence node module is embodied in instructions executable by the processor and stored in at least one non-transitory processor-readable medium.

19. The apparatus of claim 17 wherein communicating traffic between the first and the third LAN distributed convergence nodes via the routing distributed convergence node includes communicating traffic via non-user network infrastructure devices in the form of at least one of a router or a server, in addition to the elected networked client devices.

20. An apparatus adapted to be used in a communication network environment, the communication network environment including a number of networked client devices that execute respective instances of end-user applications and a number of non-user devices in the form of routers or servers, the networked client devices and non-user devices communicatively coupled in at least a first, a second, and a third local area network (LAN) separated from each other by a wide area network (WAN), the LANs being IP-multicast-capable and the WAN being non-IP-multicast-capable, the apparatus comprising:

one of the networked client devices having a routing distributed convergence node module at an application level of one of the end-user applications executing on the one of the networked client devices, the routing distributed convergence node module including:
an elector module to elect the one of the networked client devices as a routing distributed convergence node in the second LAN;
an identifier module to identify the one of the networked client devices from other networked client devices of the second LAN, including identification of the one of the networked client devices as the elected routing distributed convergence node; and
a network interface communicatively coupled with a processor to communicate multicast data with the other networked client devices of the second LAN using IP multicast connections and to communicate the multicast data with a first distributed convergence node in the first LAN via the non-IP-multicast-capable WAN using a first unicast connection and to communicate the multicast data with a third distributed convergence node in the third LAN via the non-IP-multicast-capable WAN using a second unicast connection, so as to forward the multicast data between the first and the third distributed convergence nodes over the non-IP-multicast-capable WAN using the first and the second unicast connections, the routing distributed convergence node in the second LAN configured to forward multicast data from each distributed convergence node of a plurality of LANs including the first and the third LANs to each other distributed convergence node of the plurality of LANs using unicast connections through the non-IP-multicast-capable WAN.

21. The apparatus of claim 20 wherein the routing distributed convergence node module is embodied in instructions executable by the processor and stored in at least one non-transitory processor-readable medium.

22. The apparatus of claim 20 wherein the routing distributed convergence node module is adapted to perform a session setup stage between the first distributed convergence node and the routing distributed convergence node, perform a registration stage between the first distributed convergence node and the routing distributed convergence node, conduct a data streaming stage between the first distributed convergence node and the third distributed convergence node, perform an unregistration stage between the first distributed convergence node and the routing distributed convergence node, and perform a session teardown stage between the first distributed convergence node and the routing distributed convergence node.

23. The apparatus of claim 20 wherein communicating traffic between the first and the third LAN distributed convergence nodes via the routing distributed convergence node includes communicating traffic via non-user network infrastructure devices in the form of at least one of a router or a server, in addition to the elected networked client devices.

Referenced Cited
U.S. Patent Documents
5327428 July 5, 1994 Van As et al.
5412654 May 2, 1995 Perkins
5426637 June 20, 1995 Derby et al.
5617539 April 1, 1997 Ludwig et al.
5717689 February 10, 1998 Ayanoglu
5903559 May 11, 1999 Acharya et al.
5916302 June 29, 1999 Dunn et al.
5978851 November 2, 1999 Kayama et al.
5987011 November 16, 1999 Toh
5987518 November 16, 1999 Gotwald
6021119 February 1, 2000 Derks et al.
6122281 September 19, 2000 Donovan et al.
6130880 October 10, 2000 Naudus et al.
6131123 October 10, 2000 Hurst et al.
6163692 December 19, 2000 Chakrabarti et al.
6275575 August 14, 2001 Wu
6314089 November 6, 2001 Szlam et al.
6477366 November 5, 2002 Valentine et al.
6501739 December 31, 2002 Cohen
6504836 January 7, 2003 Li et al.
6535486 March 18, 2003 Naudus et al.
6563793 May 13, 2003 Golden et al.
6591301 July 8, 2003 Li et al.
6603965 August 5, 2003 Dinkin
6628625 September 30, 2003 Birdwell et al.
6631415 October 7, 2003 James et al.
6647430 November 11, 2003 Minyard et al.
6697342 February 24, 2004 Smyth et al.
6748447 June 8, 2004 Basani et al.
6751200 June 15, 2004 Larsson et al.
6775258 August 10, 2004 van Valkenburg et al.
6795688 September 21, 2004 Plasson et al.
6873627 March 29, 2005 Miller et al.
6963563 November 8, 2005 Wong et al.
7103011 September 5, 2006 Kumar et al.
7286532 October 23, 2007 Rachepalli et al.
7483397 January 27, 2009 Meier et al.
7512124 March 31, 2009 Sangroniz et al.
7522537 April 21, 2009 Joshi
7522731 April 21, 2009 Klemba et al.
7698463 April 13, 2010 Ogier et al.
7710961 May 4, 2010 Miller et al.
7734293 June 8, 2010 Zilliacus et al.
7876756 January 25, 2011 Muramoto et al.
7899951 March 1, 2011 Mahany et al.
7961646 June 14, 2011 Liu et al.
7961694 June 14, 2011 Chan et al.
8005952 August 23, 2011 Kammer et al.
8081629 December 20, 2011 Miller et al.
8121057 February 21, 2012 Botha et al.
8340094 December 25, 2012 Clack et al.
20010005368 June 28, 2001 Rune
20010049283 December 6, 2001 Thomas
20010055279 December 27, 2001 Abe
20020029278 March 7, 2002 Shiouchi et al.
20020044549 April 18, 2002 Johansson et al.
20020064149 May 30, 2002 Elliott et al.
20020069278 June 6, 2002 Forslöw
20020101829 August 1, 2002 Murai et al.
20020161841 October 31, 2002 Kinnunen
20020188865 December 12, 2002 Joseph et al.
20020191612 December 19, 2002 Curtis
20020196802 December 26, 2002 Sakov et al.
20030002448 January 2, 2003 Laursen et al.
20030012149 January 16, 2003 Maggenti et al.
20030018792 January 23, 2003 Shiouchi et al.
20030037109 February 20, 2003 Newman et al.
20030037160 February 20, 2003 Wall et al.
20030041141 February 27, 2003 Abdelaziz et al.
20030058858 March 27, 2003 Berlyoung et al.
20030128689 July 10, 2003 Peirce, Jr. et al.
20030135638 July 17, 2003 Brabson et al.
20030137959 July 24, 2003 Nebiker et al.
20030210656 November 13, 2003 Biacs et al.
20030233538 December 18, 2003 Dutertre
20040001446 January 1, 2004 Bhatia et al.
20040052218 March 18, 2004 Knappe
20040141511 July 22, 2004 Rune et al.
20040233855 November 25, 2004 Gutierrez et al.
20040233881 November 25, 2004 Kang et al.
20040260814 December 23, 2004 Budge et al.
20040264466 December 30, 2004 Huang
20050008024 January 13, 2005 Newpol et al.
20050021616 January 27, 2005 Rajahalme et al.
20050100016 May 12, 2005 Miller et al.
20050135286 June 23, 2005 Nurminen et al.
20050152305 July 14, 2005 Ji et al.
20060126587 June 15, 2006 Tsubota
20060146821 July 6, 2006 Singh et al.
20060146921 July 6, 2006 Vermeersch et al.
20060159090 July 20, 2006 Chang et al.
20060182034 August 17, 2006 Klinker et al.
20060212582 September 21, 2006 Gupta et al.
20060218301 September 28, 2006 O'Toole et al.
20060268749 November 30, 2006 Rahman et al.
20060285529 December 21, 2006 Hares et al.
20070067487 March 22, 2007 Freebairn et al.
20070280230 December 6, 2007 Park
20080013465 January 17, 2008 Yasukawa et al.
20080056257 March 6, 2008 Furukawa et al.
20080062941 March 13, 2008 Rhee et al.
20080069105 March 20, 2008 Costa et al.
20080112404 May 15, 2008 Kuri et al.
20080117823 May 22, 2008 Krishnakumar et al.
20080181161 July 31, 2008 Gi Kim et al.
20080205395 August 28, 2008 Boddapati et al.
20080219237 September 11, 2008 Thubert et al.
20080240096 October 2, 2008 Botha et al.
20090023324 January 22, 2009 Koike et al.
20090116393 May 7, 2009 Hughes et al.
20090231189 September 17, 2009 Reddy et al.
20090257432 October 15, 2009 Yamaguchi et al.
20100142446 June 10, 2010 Schlicht et al.
20110013632 January 20, 2011 Botha et al.
20120134301 May 31, 2012 Botha et al.
20130114596 May 9, 2013 Clack et al.
Foreign Patent Documents
1613023 January 2006 EP
Other references
  • Extended European Search Report for corresponding European Patent Application No. EP08744648.0, dated Jun. 18, 2012, 9 pages.
  • International Preliminary Report on Patentability for corresponding International Application No. PCT/US2008/58718, mailed Feb. 26, 2010, 17 pages.
  • International Search Report for corresponding International Application No. PCT/US2008/58718, mailed Jul. 8, 2008, 2 pages.
  • Office Action for corresponding Australian Patent Application No. 2008232640, mailed Mar. 28, 2012, 3 pages.
  • Written Opinion for corresponding International Application No. PCT/CA2008/001778, mailed Jan. 21, 2009, 6 pages.
  • Botha et al., “Method, Apparatus, System, and Article of Manufacture for Providing Distributed Convergence Nodes in a Communication Network Environment,” Office Action mailed Sep. 15, 2009, for U.S. Appl. No. 12/057,289, 26 pages.
  • Botha et al., “Method, Apparatus, System, and Article of Manufacture for Providing Distributed Convergence Nodes in a Communication Network Environment,” Amendment filed Jul. 20, 2012, for U.S. Appl. No. 12/724,244, 15 pages.
  • Botha et al., “Method, Apparatus, System, and Article of Manufacture for Providing Distributed Convergence Nodes in a Communication Network Environment,” Notice of Allowance mailed Oct. 22, 2012, for U.S. Appl. No. 12/724,244, 8 pages.
  • Botha et al., “Method, Apparatus, System, and Article of Manufacture for Providing Distributed Convergence Nodes in a Communication Network Environment,” Office Action mailed Mar. 20, 2012, for U.S. Appl. No. 12/724,244, 39 pages.
  • Higgins et al., “Tunneling Multicast Traffic Through Non-Multicast-Aware Networks and Encryption Devices,” MILCOM 2001, Proceedings. Communications for Network-Centric Operations: Creating the Information Force, McLean, VA, Oct. 28-30, 2001; IEEE Military Communications Conference, New York, NY: IEEE, US, vol. 1, Oct. 28, 2001, pp. 296-300.
  • Botha et al., “Method, Apparatus, System, and Article of Manufacture for Providing Supernodes in a Communication Network Environment,” U.S. Appl. No. 60/908,878, filed Mar. 29, 2007, 30 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Restriction Requirement mailed Aug. 6, 2008, for U.S. Appl. No. 10/977,115, 7 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Office Action mailed May 14, 2009, for U.S. Appl. No. 10/977,115, 13 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Office Action mailed Dec. 1, 2009, for U.S. Appl. No. 10/977,115, 15 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Notice of Panel Decision from Pre-Appeal Brief review mailed Jun. 9, 2010, for U.S. Appl. No. 10/977,115, 2 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Office Action mailed Aug. 18, 2010, for U.S. Appl. No. 10/977,115, 15 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Office Action mailed Apr. 26, 2011, for U.S. Appl. No. 10/977,115, 15 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Notice of Allowance mailed Dec. 7, 2011, for U.S. Appl. No. 10/977,115, 3 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Supplemental Notice of Allowability mailed Dec. 28, 2011, for U.S. Appl. No. 10/977,115, 3 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Office Action mailed Jul. 23, 2013, for U.S. Appl. No. 13/348,459, 17 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Communications System and Method,” Amendment filed Oct. 23, 2013, for U.S. Appl. No. 13/348,459, 23 pages.
  • Botha et al., “Wide Area Voice Environment Multi-Channel Conferencing System,” U.S. Appl. No. 60/516,233, filed Oct. 31, 2003, 32 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” Amendment filed Nov. 17, 2011, for U.S. Appl. No. 12/494,728, 24 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” Amendment filed Jul. 27, 2012, for U.S. Appl. No. 12/494,728, 15 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” Notice of Allowance mailed Aug. 21, 2012, for U.S. Appl. No. 12/494,728, 13 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” Office Action mailed Jul. 20, 2011, for U.S. Appl. No. 12/494,728, 20 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” Office Action mailed Jan. 27, 2012, for U.S. Appl. No. 12/494,728, 31 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” Preliminary Amendment filed Sep. 15, 2009, for U.S. Appl. No. 12/494,728, 9 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” U.S. Appl. No. 61/077,413, filed Jul. 1, 2008, 20 pages.
  • Clack et al., “Method, Apparatus, System, and Article of Manufacture for Reliable Low-Bandwidth Information Delivery Across Mixed-Mode Unicast and Multicast Networks,” U.S. Appl. No. 61/101,466, filed Sep. 30, 2008, 36 pages.
  • European Search Report, dated May 28, 2013, for Application No. 09774295.1, 6 pages.
  • Girod et al., “A Reliable Multicast Mechanism for Sensor Network Applications,” Center for Embedded Networked Sensing, 12 pages.
  • Twisted Pair Solutions, LLC, “Envoy™—Flexible Information Delivery for CallManager Systems,” Copyright 2002.
  • Twisted Pair Solutions, LLC, “Success Story: The United States Air Force Global Hawk Team Sets Up Instant Communications Using WAVE,” Jun. 24, 2003, 2 pages.
  • Twisted Pair Solutions, LLC, “Success Story: The United States Coast Guard uses WAVE to Enhance and Extend the Reach of its Radio Communications System,” Jun. 24, 2003, 4 pages.
  • Twisted Pair Solutions, LLC, “Success Story: USDA Forest Service:: Land Mobile Radio Over IP with Wave,” Jun. 24, 2003, 2 pages.
  • Twisted Pair Solutions, LLC, “TPS Article #67—FAQ: Can I use WAVE as a conferencing system for regular phone calls?” Copyright 2002, retrieved Feb. 5, 2004, from http://www.twistpair.com/support/viewarticle.asp?id=67, 3 pages.
  • Twisted Pair Solutions, LLC, “WAVE—Scalable Instant Communications,” Copyright 2002, retrieved Feb. 5, 2004, from http://www.twistpair.com/products/wave/default.asp.
  • Twisted Pair Solutions, LLC, “WAVE—Scalable Instant Communications: Frequently Asked Questions,” Copyright 2002, retrieved Feb. 5, 2004, from http://www.twistpair.com/products/wave/faq.asp, 3 pages.
  • Twisted Pair Solutions, LLC, “WAVE Data Sheet,” Copyright 2003.
  • Twisted Pair Solutions, LLC, “WAVE: Conferencing Features of the WAVE Media Server,” Mar. 1, 2003, 2 pages.
  • Twisted Pair Solutions, LLC, “WAVE: Solution Brief—Conferencing,” Copyright 2003, 4 pages.
  • Twisted Pair Solutions, LLC, “WAVE: Solution Brief—Hoot & Holler,” Copyright 2003, 4 pages.
  • Twisted Pair Solutions, LLC, “WAVE: Solution Brief—Interoperability,” Copyright 2003, 4 pages.
  • Twisted Pair Solutions, LLC, “WAVE: Solution Brief—LMR Integration,” Copyright 2003, 4 pages.
  • Twisted Pair Solutions, LLC, “WAVE: Using WAVE as a Hoot & Holler System,” Feb. 4, 2003, 2 pages.
  • Twisted Pair Solutions, LLC, “WAVE . . . is here!” retrieved Mar. 4, 2004, from http://www.twistpair.com/default.asp.
  • Botha et al., “Method, Apparatus, System, and Article of Manufacture for Providing Distributed Convergence Nodes in a Communication Network Environment,” Office Action mailed Nov. 6, 2013, for U.S. Appl. No. 13/348,459, 19 pages.
Patent History
Patent number: 8787383
Type: Grant
Filed: Jan 16, 2013
Date of Patent: Jul 22, 2014
Patent Publication Number: 20130188639
Assignee: Twisted Pair Solutions, Inc. (Seattle, WA)
Inventors: Shaun Botha (Seattle, WA), Mark D. Bertoglio (Seattle, WA)
Primary Examiner: Jay P Patel
Application Number: 13/743,142