Real-time proxies

An arrangement for real-time data transmission through data communication networks is disclosed. The arrangement allows for real-time communication between applications located in different internal networks protected by firewalls, by representing the applications by proxies and establishing TCP channels towards an intermediate proxy server located outside the firewalls. A set of parameters residing in the server determines, among other things, the number of required TCP channels based on the ratio of measured bandwidth between the data flow directions.

Description
FIELD OF THE INVENTION

The present invention relates to real-time data transmission through data communication networks. In particular, the present invention discloses an arrangement and a method for high-reliability two-way communication in real time through one or more firewalls, advantageously by way of the HTTP/HTTPS protocols, and the use of such an arrangement and method.

BACKGROUND OF THE INVENTION

To protect a PC or network against unauthorized intrusion (hackers), firewalls are used to control all communication. More and more companies and private users install firewalls in order to protect their networks. Firewalls do not know all communication protocols on the Internet, so a proxy is needed to let certain protocols through. This is in particular the case when the networks are NATed (Network Address Translation, i.e. private addresses are used on the LAN segment). Today's firewalls do not handle real-time data well. With the increased usage of services such as IP telephony and real-time online games, there is a demand for applications that let such traffic through firewalls in a secure way.

There are many different proposals for solutions within this area; however, no good solutions currently exist and none seem to be forthcoming. The big vendors are looking into firewall control protocols that let applications open and close ports or gates on the firewall. Such solutions have two serious drawbacks. One is that certificates have to be distributed to all applications that are allowed to perform such operations; another is that security is degraded in proportion to the number of applications that are allowed to perform them.

Some of these solutions are mentioned in the following.

  • 1. Open up all ports above 1024 to all computers behind the FW. This will work if NAT isn't enabled, though at a high price, since hackers on the outside can easily attack all computers behind the FW on any port above 1024.
  • 2. Use real-time proxies located in the FW's DMZ. The drawback is that operator or corporate personnel has to buy, install, and configure software for every real-time application somebody behind the FW would like to use.
  • 3. Make use of firewall control protocols. Several groups, among others the following, work on standardizing protocols that allow client applications to open and close ports on the FW:
    • a) The MIDCOM group (Middlebox Communication) is an IETF group standardizing such protocols. Their goal is to evaluate different proposals and use the best one as the official standard.
    • b) An FCP (Firewall Control Protocol) is developed by Netscreen and Dynamicsoft.
    • c) Another FCP protocol is being developed by Netscreen, Dynamicsoft, Microsoft and Checkpoint. This protocol might be the one adopted by MIDCOM. However, standardization is extremely slow. The background is probably threefold:
      • (i) All such solutions require key and certificate distribution to everybody opening and closing FW ports. This is a huge problem and the reason why Internet payment solutions aren't widely deployed.
      • (ii) A security hole is opened.
      • (iii) Many of the big firewall vendors don't want to introduce such solutions, partly because of the two reasons mentioned above. Another reason is probably that their business case in this case is threatened. The FW logic is then partly moved from the FW to the FW clients making the FW thinner.
    • d) Microsoft has initiated a protocol called UPnP (Universal Plug and Play), which is supported by many PC peripheral vendors. This protocol has the same drawbacks as mentioned above. Still, if used in combination with proprietary signaling and only allowing clients on the inside to open up ports, it might get some market penetration. Corporations and ISPs will, however, never use it due to reduced security.
    • e) SOCKS is a protocol that has existed for a long time and can be used for FW traversal. The problems with this protocol as well as the previous one, UPnP, are as described in connection with the FCP protocol developed by Netscreen, Dynamicsoft, Microsoft and Checkpoint.
    • f) STUN requires that the FWs open for UDP traffic from the inside to the outside, as well as for responses to the same messages from the outside to the inside. Neither is common practice to open for in a FW.
  • 4) Separate real-time and data networks. The drawback is that it is expensive to set up and maintain two separate networks instead of one.

Arrangements showing a sender/receiver arrangement for two-way communication in real time through one or more firewalls, wherein the arrangement includes at least a real tunnel client behind a firewall and a NAT, are known from the following US publications: U.S. Pat. No. 6,687,245 B2 (Fangman et al.), US 2003/0093563 A1 (Young et al.) and US 2002/0150083 A1 (Fangman et al.). However, all these publications make use of firewall control protocols for opening and closing ports; hence the problem regarding opening and closing of ports, or the use of dedicated hardware, remains.

From the foregoing it should be evident that there is a need for a reliable and secure solution facilitating real-time bidirectional communication without the need for opening ports in firewalls or for dedicated hardware.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide an arrangement that eliminates the drawbacks described above. The features defined in the independent claim enclosed characterize this arrangement.

In particular, the present invention discloses a sender/receiver arrangement for high-reliability two-way data communication in real time through one or more firewalls, wherein the arrangement comprises at least a real tunnel client behind a firewall and a RealTunnel server or a media engine on the outside of said firewall.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to make the invention more readily understandable, the discussion that follows will refer to the accompanying drawings.

FIG. 1 is an overview of the elements involved in a preferred embodiment of the present invention.

FIG. 2 is a block diagram of the present invention.

FIG. 3 illustrates possible media and signaling routes for the present invention.

FIG. 4 illustrates a detailed depiction of the protocols used.

FIG. 5 shows the protocol architecture.

FIG. 6 illustrates how RealTunnel tunnels RTP in TCP.

FIG. 7 is an arrangement for the HTTP/TCP connection transmitting data from the client to the Media Engine.

FIG. 8 is an arrangement for the HTTP/TCP connection transmitting data from the Media Engine to the client.

FIG. 9: A description of the multiplexing arrangement when three channels are spawned in each direction: one combined control and media channel and two pure media channels.

FIG. 10: Modified ack behavior on TCP server side.

FIG. 11: Pseudo Messenger signaling at registration time.

FIG. 12: Call setup, two users on system

FIG. 13: A standard conferencing setup.

FIG. 14 is a copy of the overview of the elements involved in a preferred embodiment of the present invention.

FIG. 15: Simple voice and video conferencing setup.

BEST EMBODIMENT OF THE PRESENT INVENTION

In the following, the assumptions of the present invention will be described, followed by the specific aspects of the invention. For readability, in the following the wording HTTP is to be understood as HTTP or HTTPS. Further, whenever the wording HTTP or HTTPS appears, it is to be understood that the HTTP/HTTPS protocols are encapsulated in TCP; hence HTTP/HTTPS “packets” are by definition TCP packets.

The RealTunnel client can be deployed in many ways:

  • 1. As an application at an end-user's desktop.
  • 2. Integrated with applications, typically gaming applications.
  • 3. As an application in the FW's DMZ in a corporate setting. The application and FW could be located in any of the following ways:
    • a. As a corporate FW DMZ application.
    • b. As a residential FW DMZ application.
    • c. As an operator/ISP FW DMZ application.
  • 4. Integrated with a FW. This could be any type of FW, including a personal FW.

The key concept lies in the first deployment. In that case no external personnel has to configure or deal with any configuration of any FW, network etc. Another interesting scenario is the second one; in this case too, no configuration of any FW, network etc. has to be done.

The RealTunnel client will be designed in a modular way such that new applications and/or protocols can rapidly be supported.

The present invention will advantageously work in two different scenarios. The main approach is to let a third party like the MSN network or an external SIP registrar host the actual users and instant messaging services, with the disclosed arrangement residing in the middle. I.e., the arrangement according to the present invention only does what is required in order to handle the media and NAT parts, not the “signaling” and the main instant messaging functionality. The first release supports XP Messenger used in combination with Passport. An alternative approach for consideration is that everything is hosted by the IP right owner according to the present invention: the instant messaging services, potentially a SIP client, etc. Such an approach would, however, require much more development time.

UDP is almost always used for two-way real-time connections. HTTP runs over TCP. TCP's drawback is that it doesn't maintain the good real-time characteristics of UDP. We have identified a mechanism for simulating UDP behavior on TCP.

The users may pay for different bandwidth classes. The RealTunnel according to the present invention can perform policing and control the actual bandwidth used by the users by counting IP packets. A willingness-to-pay mechanism might be provided, giving those with high willingness to pay better QoS than others. This product may also support billing, prepaid and postpaid.

A provisioning system is included for operations like adding user accounts, modifying user accounts, deleting accounts, adding balance to a user account, checking current registrations, viewing current calls, etc.

The goal of the present invention is to let HTTP traffic pass through firewalls while controlling all HTTP RealTunnel clients and servers; the present invention does not have to be fully HTTP compliant. This means that only as much HTTP compliance as is needed to bypass firewalls has to be implemented. The HTTP server has many possibilities for being optimized.

High Level Description

FIG. 1 provides a high-level schematic overview of the system. RTS denotes the signaling servers and ME the media servers. The computer in the upper left corner is located directly on the Internet and might typically be using Messenger and be connected directly to a SIP registrar or an equivalent Passport server. The other clients are located behind NAT devices and have to use the RealTunnel SW. These are connected to RTS servers.

FIG. 1 can be expanded into two different high level parts as shown in FIG. 2.

FIG. 2 shows the key processes, with the present invention shown as Real Tunnel Client, DB and Real Tunnel Server Park.

The PC application is configured to send data to the RealTunnel client, and not the unreachable receiver on the Internet. If the RealTunnel client is behind an HTTP proxy, the RealTunnel client is automatically configured with the HTTP proxy address and port. These data are automatically extracted from the Internet Explorer settings, if IE is installed on the client computer. If not, the user must enter the data manually.

The RealTunnel clients will register at start-up towards the RealTunnel server according to the present invention.

Media and Signaling Paths

FIG. 3 describes the situation, with TCP as full lines, UDP as dotted lines, and HTTP-carried traffic as thick lines. SIP is blue and RTP is red.

Several call scenarios exist:

  • 1. Messenger A and Messenger B are located behind different firewalls. When they place a call, the signaling will go back and forth to the RealTunnel server and the media flow back and forth to the ME, see 1).
  • 2. Messenger A and Messenger B are located behind the same firewall on the same LAN. When they place a call, the signaling flow will go back and forth to the RealTunnel server. The media flow however will go directly between the two Messenger applications, the alternative approach in the figure, see 2).
  • 3. When one of the subscribers, such as Messenger A or B, wants to place a call to a pure Passport or SIP registrar subscriber, the signaling and media flow will be as depicted in FIG. 3, see 3).

In order to replace Messenger's SIP and RTP streams, one must have a SOCKS server to receive the TCP signaling, listen for messages with SIP and RTP/UDP addresses in them, and replace these.

FIG. 4 describes the RealTunnel client in more detail. While forwarding the Passport or SIP registrar messages, it must look for messages carrying SIP addresses and replace them with the local RealTunnel client SIP address.

The SIP proxy must do the same: go through the SDP in INVITEs, replacing the media (RTP) addresses with those of the local RTP proxies.

FIG. 4 shows a detailed depiction of the protocols that are used.

Protocol Architecture

See FIG. 5 for a description of the protocol architecture. The RealTunnel client and server only need minor MSNP, RTP and RTCP knowledge because this information basically is passed on transparently. FIG. 5 outlines the protocol architecture in particular when Messenger is the supported application, but other applications will also be supported in the same way. Messenger normally performs the proprietary MSNP signaling over TCP towards Passport and SIP signaling over UDP directly between the Messenger endpoints. Messenger may be configured to work towards an HTTP proxy or a SOCKS server for MSNP firewall traversal. The present invention takes advantage of the SOCKS Messenger functionality.

RTP over TCP

When tunneling RTP in TCP, the RealTunnel adds a three byte header field (address) in the beginning of the TCP data field as shown in FIG. 6.

The three byte RT header states the length of the RTP packet (two bytes) and whether the payload is RTP or RTCP (one byte). On good networks as well as when Nagle's algorithm is disabled, one TCP packet usually contains one RT header and one RTP packet as indicated in FIG. 6. One TCP packet can however contain several RT and corresponding RTP packets. One TCP packet can also contain fragments of RTP packets; it is the TCP stack that decides how to fragment the RTP packets.
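The framing described above can be sketched as follows. This is a minimal illustration only: the network byte order and the concrete values of the one-byte type field are assumptions, since the text only states the field sizes (two bytes for the RTP packet length, one byte for the RTP/RTCP flag).

```python
import struct

# Hypothetical values for the one-byte payload-type field;
# the description does not specify the actual encoding.
TYPE_RTP = 0
TYPE_RTCP = 1

def frame_packet(payload: bytes, is_rtcp: bool = False) -> bytes:
    """Prepend the 3-byte RT header: 2 bytes length + 1 byte type."""
    ptype = TYPE_RTCP if is_rtcp else TYPE_RTP
    return struct.pack("!HB", len(payload), ptype) + payload

def deframe(stream: bytes):
    """Split a received TCP byte stream into (type, payload) tuples."""
    packets = []
    while len(stream) >= 3:
        length, ptype = struct.unpack("!HB", stream[:3])
        if len(stream) < 3 + length:
            break  # fragmented RTP packet: wait for more TCP data
        packets.append((ptype, stream[3:3 + length]))
        stream = stream[3 + length:]
    return packets

# One TCP payload may carry several RT headers and RTP packets:
framed = frame_packet(b"voice-data") + frame_packet(b"rtcp-report", is_rtcp=True)
print(deframe(framed))
```

Note that, as the text states, the TCP stack decides the actual segmentation, which is why the deframer must tolerate a fragment at the end of the buffer.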

HTTP Support

HTTP 1.0 Support

HTTP proxies such as Squid only allow one outstanding request, i.e. one GET or POST from the client towards the server. The RESPONSE must be processed before a new GET or POST may be transmitted. This is the reason why it is important to set up two unidirectional HTTP/TCP connections when handling real-time data, i.e. one connection for transmitting data and one connection for receiving data. It is important to keep the GET or POST outstanding as long as possible to save bandwidth and processing power, i.e. to spawn a minimum of two TCP channels from each RealTunnel client towards a Media Engine. One is dedicated to data transmitted from the client towards the server, see FIG. 7, and one from the server to the client, see FIG. 8. When sending new GET/POSTs from the RealTunnel client to the Media Engine, the TCP connection is reused when and as long as possible. A new pair of HTTP/TCP connections is initiated and established BEFORE a timeout occurs on the previous connection pair. Accordingly, transmission and reception on the new connections are started before closing down the old connections. In this way a smooth migration from the old connections to the new connections is maintained.

HTTP data as shown in FIG. 8 is pure TCP/IP data, i.e. no HTTP header or specific HTTP information is included.
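The two unidirectional channels could be opened with requests along the following lines. This is a sketch under stated assumptions: the `/realtunnel/up` and `/realtunnel/down` paths, the `X-Session` header, and the large Content-Length (to keep the POST outstanding) are invented for illustration and do not come from the description.

```python
def upstream_request(host: str, session_id: str) -> bytes:
    # Long-lived POST: the client streams media towards the Media Engine.
    # The oversized Content-Length keeps the request outstanding.
    return (f"POST /realtunnel/up HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"X-Session: {session_id}\r\n"
            f"Content-Length: 1000000\r\n\r\n").encode()

def downstream_request(host: str, session_id: str) -> bytes:
    # Long-lived GET: the Media Engine streams media back in the
    # RESPONSE body on this second, receive-only connection.
    return (f"GET /realtunnel/down HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"X-Session: {session_id}\r\n\r\n").encode()

print(upstream_request("me.example", "s1").decode(), end="")
```

A fresh pair of such requests would be issued on new TCP connections shortly before the proxy's idle timeout, so that traffic migrates without interruption.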

HTTPS Support

HTTPS is currently enabled. The advantage of HTTPS is that on many networks it is possible to use TCP directly after the normal setup procedure.

The setup procedure on networks where HTTPS proxies are required is that the HTTP client, i.e. the RealTunnel client, sends an HTTP CONNECT message towards the HTTP proxy. The HTTP proxy then sends a response back. In some cases plain TCP can then be used directly between the RealTunnel client, the HTTP proxy, the RealTunnel server and the Media Engine. In other cases SSL has to be used. SSL, however, adds less overhead than HTTP, at least when encryption can be turned off on the SSL layer.

Authentication of Sessions

Before each new connection is set up from any TCP/HTTP client towards a RealTunnel server or an ME, an authentication procedure is performed before the connection is accepted on the application level. According to the present invention one advantageously waits for the first portion of data, which includes a user id and a hashed password, before accepting the connection on the RealTunnel server and ME side. When the connection is set up, one may still check the user id and password on each datagram.
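The first data portion described above might be laid out as in the sketch below. The wire layout (length-prefixed user id followed by the digest) and the choice of SHA-256 are assumptions for illustration; the description only requires a user id and a hashed password.

```python
import hashlib
import struct

def auth_blob(user_id: str, password: str) -> bytes:
    """First data portion on a new connection: user id + hashed password.
    Layout (assumed): 2-byte id length, id bytes, 32-byte SHA-256 digest."""
    uid = user_id.encode()
    digest = hashlib.sha256(password.encode()).digest()
    return struct.pack("!H", len(uid)) + uid + digest

def verify(blob: bytes, credentials: dict) -> bool:
    """Server side: parse the blob and check it before accepting
    the connection on the application level."""
    (uid_len,) = struct.unpack("!H", blob[:2])
    uid = blob[2:2 + uid_len].decode()
    digest = blob[2 + uid_len:2 + uid_len + 32]
    expected = credentials.get(uid)
    if expected is None:
        return False
    return digest == hashlib.sha256(expected.encode()).digest()

creds = {"alice": "secret"}
print(verify(auth_blob("alice", "secret"), creds))  # True
print(verify(auth_blob("alice", "wrong"), creds))   # False
```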

QoS Mechanisms

Improving TCP Real-Time Characteristics by Caching and Dropping Packets

RealTunnel has implemented several mechanisms to make TCP exhibit UDP behavior. One of the features is based on caching and dropping techniques. The RealTunnel system has two levels of cache and possibilities for dropping packets.

Cache level 1: Cache level 1 is the TCP sender buffer. Currently the RealTunnel components don't have direct access to this buffer, but this can be implemented. Currently RealTunnel is only able to detect when the TCP sender buffer is full, not to what degree it is partly full. The TCP socket write function always returns the number of bytes successfully written to the socket. Since the application also always knows how many bytes it tried to transmit, it knows when cache level 1 is full. The optimal cache size can depend on throughput, network condition, etc. The ME currently has this value fixed at 8 Kbytes. The RealTunnel client has this value fixed at 4 Kbytes. The ME drops packets when cache level 1 is reached.
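The full-buffer detection described above can be demonstrated with a non-blocking socket pair; the loop count and buffer sizes below are illustrative, not values from the description.

```python
import socket

def try_send(sock: socket.socket, data: bytes) -> bool:
    """Return False when cache level 1 (the TCP sender buffer) is full,
    i.e. the stack accepted fewer bytes than we tried to write."""
    try:
        sent = sock.send(data)
    except BlockingIOError:
        sent = 0
    return sent == len(data)

# Demonstration: nothing reads from the peer, so the sender buffer fills.
a, b = socket.socketpair()
a.setblocking(False)
packet = b"\x00" * 1500          # roughly one RTP-sized write
full = False
for _ in range(10000):
    if not try_send(a, packet):
        full = True              # cache level 1 reached: drop this packet
        break
print("sender buffer filled:", full)
a.close(); b.close()
```

On detecting a full buffer the application would drop the RTP packet (or, with cache level 2, delete whole RTP packets from its own buffer) instead of blocking.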

Cache level 2: It is possible to add a cache level contained only within the application. Currently the RealTunnel client has implemented this cache level, which makes it easier to manage RTP packets. Since RTP packets are dropped when cache level 1 is reached, cache level 2 is used so that whole RTP packets can be deleted within the cache level 2 buffer. The RealTunnel client has full control of the fill level of cache 2.

Drop packets: Packets are dropped as described above when cache levels 1 and 2 are exceeded.

Signaling information is cached with 64-Kbyte buffers, since it is important that this information is forwarded.

Mechanisms for Increasing Bandwidth in Congested Networks

A problem when using TCP as a bearer of real-time data is that one TCP connection does not always provide the necessary throughput. This is in particular a problem when transmitting voice codec data. A solution to this problem is to spawn several TCP connections when one TCP connection is not sufficient. It is here assumed that the new TCP connections are initiated and spawned by the RealTunnel client.

Different ways of identifying when to spawn new TCP connections:

  • 1. Base it on cache parameters.
  • 2. Base it on dropped packets.
  • 3. Measure bandwidth, i.e. base it on transmitted bytes vs. received bytes.
  • 4. Base it on RTCP messages.
  • 5. Base it on the TCP window size.
  • 6. Base it on roundtrip time.
  • 7. Any combination of the above.

Base it on the TCP window size: This feature or possibility is currently not implemented. It is possible to get direct access to the TCP window size when operating on the OS (Operating System) level. For RealTunnel clients on client computers that would typically imply making a driver. The TCP window size is the minimum of the sender's congestion window size and the receiver's window size. The TCP window size can be used to decide when to spawn new TCP connections. Typically new channels can be spawned when the TCP window size decreases, since this might indicate packet loss and a degraded network link.

Base it on RTCP messages: RTCP reports many parameters that can be used to decide when to spawn new TCP connections. Such parameters are roundtrip time, total number of packets sent vs. received, total number of bytes sent vs. received, packets lost during the last time interval, total number of packets lost, etc. New channels can be spawned when e.g. the roundtrip time increases and/or the number of lost packets increases.

Base it on roundtrip time: When the roundtrip time between the RealTunnel client and server increases, new TCP connections can be spawned. Increased roundtrip time indicates degradation of the network link.

RTCP messages might be a good choice since the RTCP reports are comprehensive and accurate. Cache and drop levels are a good alternative. The cache level can then be used for low threshold levels and the drop level for higher threshold levels.

The following protocol is implemented for the purpose of spawning new TCP connections:

The multi protocol between the RealTunnel client and RealTunnel server is defined with the following messages:

  • 1. RealTunnel Server->RealTunnel client: SetMaxNoOfConnections [STATIC]
  • 2. RealTunnel server->RealTunnel client: Epsilon [STATIC]

A similar protocol exists between the RealTunnel server and the Media Engine.

The RealTunnel server reads a configuration file with parameters 1 to 2 in the list above. These parameters are read by the RealTunnel server at start-up and directly passed on to the RealTunnel client and the ME, and are used by both. SetMaxNoOfConnections states the maximum number of TCP data connections that are allowed to be used. This parameter is used both in the RealTunnel client and the Media Engine. Epsilon states how sensitive the client shall be to spawning new TCP connections. An overall picture of the transmitting and receiving arrangement for three channels (TCP connections) is given in FIG. 9, where each of Application 1 and Application 2 is a RealTunnel client or a Media Engine. Note that the number of channels on the originating side (Application 1) is independent of the number on the terminating side (Application 2).

It is also possible to reduce the number of TCP connections if the network condition improves.

Different ways of identifying when to reduce the number of TCP connections:

  • 1. Base it on cache parameters.
  • 2. Base it on dropped packets.
  • 3. Measure bandwidth, i.e. base it on transmitted bytes vs. received bytes.
  • 4. Base it on RTCP messages.
  • 5. Base it on the TCP window size.
  • 6. Base it on roundtrip time.
  • 7. Any combination of the above.

Base it on the TCP window size: The TCP window size can be used to decide when to reduce the number of TCP connections. Typically channel(s) can be removed when the TCP window size increases, since this might indicate a better TCP connection.

Base it on RTCP messages: Channels can be removed when e.g. the roundtrip time decreases and/or the number of lost packets decreases.

Base it on roundtrip time: When the roundtrip time between the RealTunnel client and server decreases, TCP connections can be removed. Decreased roundtrip time indicates an improved network link.

RTCP messages might be a good choice since the RTCP reports are comprehensive and accurate. Cache and drop levels are a good alternative. The cache level can then be used for low threshold levels and the drop level for higher threshold levels.

An additional ReduceEpsilon message in the multi protocol between the RealTunnel client and RealTunnel server should be added. This number indicates the threshold level for how easily existing TCP connections should be removed when the network condition improves.

When HTTPS is used instead of HTTP it might be convenient to set up dedicated transmitter and receiver channels.

Cache

It is possible to spawn new TCP connections based on any of the following criteria:

  • 1. Cache level 1 is full.
  • 2. Cache level 2 is full.
  • 3. Cache level 1 has met a certain threshold level.
  • 4. Cache level 2 has met a certain threshold level.

It is possible to use the same scheme for reducing the number of TCP connections if the network condition improves.

Drop

It is possible to spawn new TCP connections based on the following criteria:

  • 1. The current drop rate. I.e. the drop rate e.g. the last second.
  • 2. A function of the current drop rate and the previous drop rate(s) weighting the most recent ones highest.

It is possible to use the same scheme for reducing the number of TCP connections if the network condition improves.

Measure Bandwidth

It is possible to spawn new TCP connections when the transmitting rate is higher than the receiving rate. This section describes a protocol to use for this purpose.

The multi protocol between the RealTunnel client and the RealTunnel server is defined with the following additional messages:

  • 1. RealTunnel server->RealTunnel client: SetPollInterval [STATIC]
  • 2. Media Engine->RealTunnel client: PassOnBw [DYNAMIC]

A similar protocol exists between the RealTunnel server and the Media Engine.

The RealTunnel server reads a configuration file with parameter 1. This parameter is read by the RealTunnel server at start-up and directly passed on to the RealTunnel client and the ME. Parameter 2 (PassOnBw) is dynamic and is passed on each time the ME has calculated received and transmitted bandwidth for a certain period, as explained further below. SetPollInterval indicates how often the RealTunnel client and ME shall calculate transmitted bandwidth.

At certain intervals (the poll interval) the sender and receiver calculate both transmitted bandwidth and received bandwidth. At each poll interval the Media Engine sends the calculated received and transmitted bandwidth over to the RealTunnel client. The RealTunnel client accordingly has calculated transmitted and received bandwidth for the same period. The RealTunnel client side calculates the new number of channels based on the following algorithm:
if (totalTransmittingBw/totalReceivingBw>epsilon) spawn one new TCP connection

Where epsilon e.g. might be 1.04. This means that the senders transmit approximately 4% more than the receivers get. This multi control protocol is designed stateless in order to save complexity and processing power.
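The decision at the end of each poll interval can be sketched as below; the function name and the cap via SetMaxNoOfConnections are modeled on the parameters described earlier, while the exact integration is an assumption.

```python
def channels_to_spawn(tx_bytes: int, rx_bytes: int, epsilon: float = 1.04,
                      current: int = 1, max_connections: int = 4) -> int:
    """Poll-interval decision on the RealTunnel client side.

    tx_bytes / rx_bytes: bytes the sender transmitted vs. what the
    receiver reported back (the PassOnBw message) for the same period.
    epsilon = 1.04 tolerates the sender being up to 4% ahead.
    max_connections mirrors SetMaxNoOfConnections."""
    if rx_bytes == 0:
        return current  # no report yet; keep the current channel count
    if tx_bytes / rx_bytes > epsilon and current < max_connections:
        return current + 1          # spawn one new TCP connection
    return current

print(channels_to_spawn(1060, 1000))  # ratio 1.06 > 1.04 -> 2
print(channels_to_spawn(1020, 1000))  # ratio 1.02 <= 1.04 -> 1
```

Being stateless, each decision uses only the latest poll-interval figures, matching the description's aim of saving complexity and processing power.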

It is possible to use the same scheme for reducing the number of TCP connections if the network condition improves.

Server Initiated Spawning of new TCP Connections

In some cases it might be helpful to also let the server side (the ME) initiate new TCP connections. But since TCP connections are always initiated on the client side, this means that the RealTunnel client must get a message from the ME telling the RealTunnel client to spawn a new TCP connection.

This mechanism is advantageous when the bandwidth from the RealTunnel client to the ME is sufficient, but the bandwidth from the ME to the RealTunnel client suffers.

The ME must send a message to the RealTunnel client stating: spawn one or several new TCP connections. The message may alternatively be sent from the ME to the RealTunnel server and then to the RealTunnel client.

Server Initiated Reducing Number of TCP Connections

According to the scheme above, the server may also reduce the number of TCP connections.

Scheme for Maximizing Throughput on Steady State Connections

Assuming no packet loss, the following scheme is optimal for maximizing TCP throughput:

Send subsequent number of RTP packets on the same TCP connection at MAXIMUM according to:
RoundTrip Delay/[Number of TCP connections*ms between each RTP packet]

The sender must at certain intervals (it can be the poll interval previously mentioned) check the roundtrip delay and transmit according to this scheme.
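The scheme above reduces to a one-line computation; the example numbers (roundtrip delay, packet interval) are illustrative.

```python
def max_packets_per_connection(rtt_ms: float, n_connections: int,
                               packet_interval_ms: float) -> int:
    """Maximum number of subsequent RTP packets to send on the same TCP
    connection: RoundTrip Delay / [Number of TCP connections * ms
    between each RTP packet], floored, and never less than one."""
    return max(1, int(rtt_ms / (n_connections * packet_interval_ms)))

# e.g. 120 ms roundtrip, 3 connections, one RTP packet every 20 ms:
print(max_packets_per_connection(120, 3, 20))  # -> 2
```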

Improving Throughput by Using the Best TCP Connections

When packet loss is detected it is important NOT to use congested TCP connections, typically TCP connections that have experienced packet loss. Therefore the sender should always rank the available TCP connections. The ranking method can be any of the following:

  • 1. Base it on cache parameters; see the section; Mechanisms for increasing bandwidth in congested networks; Cache.
  • 2. Base it on dropped packets, see section; Mechanisms for increasing bandwidth in congested networks; Drop.
  • 3. Base it on transmitted bytes vs. received bytes; see section; Mechanisms for increasing bandwidth in congested networks; Measure Bandwidth.
  • 4. Base it on the RTCP messages, see section; Mechanisms for increasing bandwidth in congested networks.
  • 5. Base it on the TCP window size, see section; Mechanisms for increasing bandwidth in congested networks.
  • 6. Base it on roundtrip time, see section; Mechanisms for increasing bandwidth in congested networks.
  • 7. Any combination of the above.

RTCP messages might be a good choice since the RTCP reports are comprehensive and accurate.

The sender will always go through the available TCP connections in a round-robin manner. When the first non-congested connection is found, the sender transmits the desired data on that particular TCP connection.

If only congested TCP connections are found, the least congested TCP connection will be used.

If all the TCP connections are congested according to the criteria chosen, the subsequent RTP packets will be dropped until at least one TCP connection can pass on one whole RTP packet.
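The selection procedure described in the last three paragraphs can be sketched as follows. The `rank` field stands in for whichever ranking criterion is chosen (drop rate, RTCP roundtrip time, etc.); its name and the dataclass layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    congested: bool = False
    rank: float = 0.0   # lower is better, e.g. recent drop rate or RTT

def pick_channel(channels, start: int):
    """Round-robin from `start`: return the first non-congested channel,
    falling back to the least congested one if all are congested."""
    n = len(channels)
    ordered = [channels[(start + i) % n] for i in range(n)]
    for ch in ordered:
        if not ch.congested:
            return ch
    return min(ordered, key=lambda c: c.rank)

chans = [Channel("c0", congested=True, rank=0.3),
         Channel("c1", congested=True, rank=0.1),
         Channel("c2")]
print(pick_channel(chans, start=0).name)  # "c2": first non-congested
chans[2].congested = True
chans[2].rank = 0.5
print(pick_channel(chans, start=0).name)  # "c1": least congested fallback
```

The final rule from the text (drop RTP packets while no connection can pass a whole packet) would sit above this selection, gating whether a packet is handed to `pick_channel` at all.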

Theoretical Background

Suppose the capacity of a single bottleneck between a TCP client and TCP server is C, and that initially one TCP connection gets a throughput x. Then if (n−1) new connections are spawned, the n connections will each receive xC/(nx+C−x).

(Note that if n=1, then of course the answer is x.)

A simple example to illustrate:

Suppose a router carries only TCP traffic, say 100 TCP connections are running through this router. The RealTunnel client has one single TCP connection through this router with a steady-state throughput of 8 kbit/s. If the RealTunnel client sets up one additional TCP connection through this router, the new combined throughput for these two connections will be:
2*[8*1000/(2*8+1000−8)] = 15.87 kbit/s.
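The formula and the worked example can be checked numerically; the values x = 8 kbit/s and C = 1000 kbit/s are those used in the computation above.

```python
def per_connection_throughput(x: float, capacity: float, n: int) -> float:
    """Throughput of each of n connections through a bottleneck of
    capacity C, when a single connection previously achieved x:
    xC / (nx + C - x)."""
    return x * capacity / (n * x + capacity - x)

x, C = 8.0, 1000.0                      # kbit/s
combined = 2 * per_connection_throughput(x, C, 2)
print(round(combined, 2))               # -> 15.87
```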
Other Considerations and Optimizations

When the roundtrip delay increases and Nagle's algorithm is enabled (it is enabled on almost all TCP stacks by default), larger and larger TCP packets are sent to preserve the network condition. When only one TCP connection is used between a RealTunnel client and a Media Engine, it does not matter how many RTP packets are packed into one TCP packet. The reason is that TCP, when using Nagle's algorithm, doesn't send ANY packet before the previous packet is acknowledged (or a full TCP packet is ready to be sent). In case of a packet loss, this means that there will be a full stop in the communication between the RealTunnel client and the ME until the lost TCP packet (and corresponding RTP packets) is resent and acknowledged.

When several TCP connections are used, the situation is different. One must then also keep in mind what capabilities the actual applications, such as Messenger, have for coping with lost RTP packets. Messenger can handle up to 3 consecutive lost RTP packets. This implies that at most three RTP packets should be packed into one TCP packet, so that the loss of a single TCP packet is tolerated. To ensure that this is the case, Nagle's algorithm should be turned off. Nagle's algorithm may be turned off both in the RealTunnel clients and in the ME.
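On BSD-style socket APIs, Nagle's algorithm is turned off with the standard `TCP_NODELAY` option; a minimal sketch:

```python
import socket


def open_nodelay_connection(host: str, port: int) -> socket.socket:
    """Open a TCP connection with Nagle's algorithm disabled, so that small
    RTP-carrying writes are sent immediately instead of being coalesced
    into larger TCP segments."""
    sock = socket.create_connection((host, port))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

The same option would be set on the accepting side (the ME) for traffic flowing in the other direction.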

Improving TCP Throughput and Characteristics by Modifying TCP Stack

Server Side TCP ACK Improvement

Since the server's side is controlled, i.e. the Media Engine, there is a possibility to improve the ME TCP stack.

When TCP packets are lost, the TCP sender will retransmit the lost packet(s) and won't send new packets before the lost packet(s) are acknowledged from the TCP server side. This behavior is unsuitable for delay-sensitive real-time traffic.

On the server side, i.e. the ME, an acknowledgement will ALWAYS be given as though all packets were received: when necessary, the 32-bit acknowledgement number of the outgoing TCP packet is modified to match the sequence number of the last received TCP packet.

There is no guarantee that a lost TCP segment will contain only whole RTP packets. There is a risk that a segment may contain a fraction of an RTP packet. If such a segment is lost, the receiver faces the problem of getting back into sync with the RTP packets. This is solved by adding a fixed bit pattern (preamble) to every RTP packet. When a TCP segment with a fractional RTP packet is lost, the receiver (ME) will not find the preamble where expected. RealTunnel then enters hunt-mode, in which the received byte stream is searched for the first occurrence of the preamble. When it is found, RealTunnel is back in sync with the RTP packets. There is a risk that the preamble pattern occurs in the RTP data itself; if so, RealTunnel could mistakenly treat wrong data as an RTP packet. In that case, the next RTP packet will most likely lack the preamble, and RealTunnel enters hunt-mode again. FIG. 10 shows the modified ACK behavior on the TCP server side.

This improvement is for traffic sent from the RT client towards the ME.
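The preamble framing and hunt-mode search described above can be sketched as follows; the preamble value and the 2-byte length field are assumptions for illustration only, as the text does not specify them:

```python
from typing import Optional, Tuple

# Assumed fixed bit pattern; the text does not specify the preamble value.
PREAMBLE = b"\xde\xad\xbe\xef"


def hunt(stream: bytes, start: int = 0) -> int:
    """Hunt-mode: return the offset of the next preamble at or after
    `start`, or -1 if none is present yet (wait for more bytes)."""
    return stream.find(PREAMBLE, start)


def next_packet(stream: bytes, offset: int) -> Tuple[Optional[bytes], int]:
    """Extract one preamble-framed RTP packet starting at `offset`.

    Assumed layout per packet: preamble | 2-byte big-endian length | payload.
    Returns (payload, next_offset), or (None, offset) when the data at
    `offset` is incomplete; enters hunt-mode if the preamble is missing."""
    if stream[offset:offset + 4] != PREAMBLE:
        offset = hunt(stream, offset)   # out of sync: enter hunt-mode
        if offset < 0:
            return None, len(stream)    # no preamble found yet
    header_end = offset + 4 + 2
    if len(stream) < header_end:
        return None, offset
    length = int.from_bytes(stream[offset + 4:header_end], "big")
    if len(stream) < header_end + length:
        return None, offset
    return stream[header_end:header_end + length], header_end + length
```

A false preamble match in RTP payload data is handled the same way: the next extraction attempt fails to find the preamble and re-enters hunt-mode.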

Server Side RTP TCP Retransmission Improvement

When the server side (ME) detects that a TCP packet (segment) is lost, the same segment is not simply retransmitted; instead, a new RTP packet is inserted into that segment before it is retransmitted. This means that lost packets can effectively be dropped while keeping the receiving TCP stack satisfied.

A packet loss is caused by congestion (CPU or bandwidth) somewhere along the path from the server to the client. A back-off strategy involves dropping random RTP packets after such a packet loss to lower the probability of another packet loss occurring.

This improvement is for traffic sent from the ME towards the RT client.

Client Side Improvement

Potentially exactly the same improvements as described in the section Server side TCP ACK improvement and the section Server side RTP TCP retransmission improvement can be applied on the client side.

SSL

SSL/TLS is designed to operate on a reliable connection oriented transport-layer (TCP). SSL/TLS will fail when used on top of RealTunnel's enhanced TCP stack, see section Server side TCP ACK improvement. This is handled in one of two different ways:

  • 1. Running the enhanced TCP stack on a port range that is used for non-SSL media only.
  • 2. Modifying the SSL record protocol so that it includes a fixed preamble. This preamble is used to find the start of a new SSL record in case a TCP segment is lost. When a TCP segment is lost, RealTunnel enters hunt-mode, in which the received byte stream is searched for the preamble. When this is found, it is known where the next SSL record starts. RealTunnel also allows the SSL records to have holes in the sequence numbering, but only increasing numbers are allowed (to avoid replay attacks).
Messenger Pseudo Signaling Flows

All the use cases in this section are abstract use cases, i.e. the real signaling flows are somewhat more complicated.

Registration

This use case shows the signaling at registration time, when a Messenger client registers to the system. This will lead to a local registration in the present database according to the present invention, as well as registrations towards .net and .net's database. FIG. 11 shows the pseudo messenger signaling at registration time.

To be able to support third party SIP servers, it is important that the RealTunnel client adds its own address to the Via and Record-Route header fields. This way, calls to the user will be routed through the RealTunnel server.

Call Setup

The two users may be registered on different RealTunnel servers, but in order to conserve resources (RealTunnel server connections), the call should not be routed through more than one RealTunnel server. This is solved by letting the system inform the calling and called RealTunnel clients about which RTP proxy to use. FIG. 12 shows a call setup for a two-user system.

The session starts as the user presses “call” or similar in Messenger, which triggers an INVITE sent to the local RealTunnel client, which starts a local RTP proxy session. The listening local RTP/UDP port is reported back to the client in the forthcoming 200 OK.

The RealTunnel client asks to be connected to the called user, so that a virtual channel is opened towards that user. The RealTunnel server locates the user. As the user is logged on to the network, the INVITE is sent to the RealTunnel client of the called party in the HTTP session. That client in turn connects back to the calling user, opening another channel in the opposite direction. The calling party's RealTunnel client inserts the address of its local RTP proxy in the SDP before forwarding the INVITE to Messenger.

If the call is accepted, Messenger replies with a 200 OK (or any other response with SDP, which is notified to the RTP-proxy), which is sent back to the RealTunnel server, and passed on to the calling RealTunnel client. This now replaces the given media addresses (SDP) with those of its listening RTP proxy, and sends the modified 200 OK upstream.

Any further SIP message follows the same path.
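The SDP rewriting performed by the RealTunnel client — replacing the given media addresses with those of its local RTP proxy — can be sketched as follows; the exact fields rewritten are assumptions based on standard SDP layout:

```python
def rewrite_sdp(sdp: str, proxy_ip: str, proxy_port: int) -> str:
    """Replace the connection address (c= line) and the audio media port
    (m=audio line) in an SDP body with those of the local RTP proxy, as
    the RealTunnel client does before forwarding an INVITE or 200 OK.

    Minimal sketch: assumes IPv4 and a single audio media description."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("c=IN IP4 "):
            line = f"c=IN IP4 {proxy_ip}"
        elif line.startswith("m=audio "):
            parts = line.split()
            parts[1] = str(proxy_port)  # m=audio <port> <proto> <formats>
            line = " ".join(parts)
        out.append(line)
    return "\r\n".join(out) + "\r\n"
```

After this rewrite, the remote party sends its RTP to the local proxy, which relays it over the tunnelled TCP connections.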

Simple Voice and Video Conference

An enhancement of the RealTunnel service offering is simple voice and video conferencing. The concept is the same as for other types of video conferencing services: such conferences consist of a set of clients and one or more central mixing units. The task of the central mixing units is to receive media (voice and video) from all the participating clients and mix it together into a new voice and video stream that is sent back to the clients (see FIG. 13).

FIG. 13 is very similar to FIG. 1 and FIG. 14. In both scenarios media is sent to and received from a central unit. It is therefore natural to add a conferencing service to the RealTunnel solution.

The problem with adding conferencing capabilities is that video mixing requires a lot of resources and therefore becomes quite expensive. Because of the RealTunnel architecture, a simple conferencing service can be supported instead of normal conferencing.

The only difference between a normal video conference and the simple voice video conference is that in the simple conferencing the central unit does not have mixing capabilities.

In the simple voice and video conferencing, the video received from the mixing unit is actually the same as the one sent from one of the participating clients (see FIG. 15).

The advantage of the simple voice and video conferencing service is that it is cheap, because no expensive video mixing hardware is in use.

The disadvantage is that it is not possible to see the picture of more than one of the participants at a time1.
1Because speakers naturally change quickly in a voice conference, a voice mixing unit has to be added to the service so that the fast changes do not become too annoying for the participants.

For the central unit to choose the originator of the video picture sent to all the other participants, one of the following algorithms can be used.

    • Time slot
      Each participating client is given a time slot, e.g. 1 minute each.
    • The one that is currently speaking
      By checking the received voice streams, the central unit can easily figure out who is currently talking, and distribute that participant's video to all the other participants.
    • Vote
      By using the RealTunnel client, the participating users can vote on which video they would like to see. The client with the most votes becomes the originating video signal sent to all the other clients.
    • Pick your own
      Via the RealTunnel client, the end-user can choose which video he would like to see. This differs from FIG. 15, because in this scenario all the clients do not have to receive the same video.
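Two of the selection algorithms above ("Vote" and "The one that is currently speaking") can be sketched as follows; the data structures are illustrative assumptions:

```python
from collections import Counter
from typing import Dict, Optional


def select_by_vote(votes: Dict[str, str]) -> Optional[str]:
    """'Vote': each participant names the client whose video they want to
    see; the client with the most votes becomes the single video source
    forwarded to everyone. Ties are broken arbitrarily."""
    if not votes:
        return None
    return Counter(votes.values()).most_common(1)[0][0]


def select_current_speaker(voice_energy: Dict[str, float]) -> Optional[str]:
    """'The one that is currently speaking': pick the participant whose
    received voice stream currently has the highest measured energy."""
    if not voice_energy:
        return None
    return max(voice_energy, key=voice_energy.get)
```

In practice the speaker-based variant would be smoothed over time, in line with the footnote above about avoiding annoyingly fast switching.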
      Abbreviation List
  • NAT Network Address Translation
  • DMZ DeMilitarized Zone
  • HTTP HyperText Transfer Protocol
  • LAN Local Area Network
  • ME Media Engine
  • RTS RealTunnel Server
  • RTC RealTunnel Client
  • QoS Quality of Service
  • PC Personal Computer
  • FW Firewall
  • RTP Real-time Transport Protocol
  • RTCP Real-time Transport Control Protocol
  • IP Internet Protocol
  • UDP User Datagram Protocol
  • TCP Transmission Control Protocol
  • UPnP Universal Plug and Play
  • SOCKS SOCKetS; authenticated firewall traversal
  • DB DataBase
  • ISP Internet Service Provider
  • UPC United Pan-Europe Communications
  • MSN MicroSoft Network
  • MSNP MicroSoft Network Protocol
  • SIP Session Initiation Protocol
  • SDP Session Description Protocol
  • SSL/TLS Secure Sockets Layer/Transport Layer Security
  • UA User Agent
  • IE Internet Explorer

Claims

1. A sender, receiver arrangement for high reliability bidirectional data communication in real time through one or more firewalls, characterized in that the data communication is provided by at least one bidirectional HTTP/HTTPS connection associated with at least one of (a) a real tunnel client behind a firewall, (b) a NAT device with a real tunnel server, and (c) a media engine on the outside of a firewall.

2. An arrangement according to claim 1, characterized in that said real tunnel client resides in a client computer and at least one real time application is running on said computer.

3. An arrangement according to claim 1, characterized in that the real tunnel client has a two-level system for caching and a system for dropping data packets, such as TCP packets.

4. An arrangement according to claim 3, characterized in that 1-5 RTP packets are packed into one TCP packet.

5. A method for high reliability bidirectional data communication in real time through one or more firewalls or NAT devices, where said method involves the use of at least a real tunnel client behind the firewall and a real tunnel server or a media engine on the outside of said firewall, characterized in that said method for data communication comprises providing at least one bidirectional HTTP/HTTPS connection, and establishing said bidirectional data communication between the real tunnel client and the media engine, wherein a new HTTP/HTTPS connection is established before time-out of the previous HTTP/HTTPS connection.

6. A method according to claim 5, characterized by employing a two-level system for caching and dropping data packets by the real tunnel client, where the data packets are TCP packets.

7. A method according to claim 6, characterized by establishing new TCP connections for the real tunnel client whenever at least one of the following conditions are met:

cache level 1 is full,
cache level 2 is full,
cache level 1 has reached a predetermined threshold level,
cache level 2 has reached a predetermined threshold level,
the rate of dropping packets has reached a predetermined rate dropping level,
a function of the current drop rate and the previous drop rate is satisfied,
the ratio between the total transmitted bandwidth and the total received bandwidth exceeds a predetermined threshold with reference to the real tunnel client.

8. A method according to claim 6, characterized by reducing TCP connections for the real tunnel client whenever one or more of the following conditions are satisfied:

cache level 1 is empty,
cache level 2 is empty,
cache level 1 has reached a predetermined threshold level,
cache level 2 has reached a predetermined threshold level,
the rate of dropping packets has reached a predetermined rate dropping level, or
a function of the current drop rate and the previous drop rate is satisfied, or
the ratio between the total transmitted bandwidth and the total received bandwidth exceeds a predetermined threshold with reference to the real tunnel client.

9. A method according to claim 6, characterized by establishing new TCP connections by the server or media engine whenever at least one of the following conditions are satisfied:

cache level 1 is full,
cache level 2 is full,
cache level 1 has reached a predetermined threshold level,
cache level 2 has reached a predetermined threshold level,
the rate of dropping packets has reached a predetermined rate dropping level,
a function of the current drop rate and the previous drop rate is satisfied,
the ratio between the total transmitted bandwidth and the total received bandwidth exceeds a predetermined threshold with reference to the real tunnel client.

10. A method according to claim 6, characterized by optimizing TCP throughput, wherein the maximum number of subsequent RTP packets transmitted on the same TCP connection is given as:

a ratio between a round-trip delay and a number of HTTP/HTTPS connections times a time interval between every single RTP packet, which is checked or measured by a sender at predetermined intervals.

11. A method according to claim 6, characterized by algorithmically ranking the best available TCP connection where said ranking is based at least one of the following conditions:

cache level 1 is full;
cache level 2 is full;
cache level 1 has reached a predetermined threshold level;
cache level 2 has reached a predetermined threshold level;
the rate of dropping packets has reached a predetermined rate dropping level;
a function of the current drop rate and the previous drop rate is satisfied;
the ratio between the total transmitted bandwidth and the total receiving bandwidth exceeds a predetermined threshold with reference to the real tunnel client;
RTCP messages have a specified characteristic, and
TCP window size has a specified characteristic.

12. A method according to claim 6, characterized by packing 1-5 RTP packets into one TCP packet.

13. A method according to claim 6, characterized by finding and disabling Nagle's algorithm.

14. A method according to claim 6, characterized in that the receiver always acknowledges all packets as received, and if TCP packets are lost the receiver will modify the TCP 32-bit acknowledgement number of the TCP packet to be equal to the TCP sequence number of the last received TCP packet.

15. A method according to claim 14, further defined by improving TCP throughput by modifying the TCP stack, where the sender adds a fixed bit pattern to every RTP packet.

16. A method according to claim 15, further defined by synchronizing the sender with the RTP packets, wherein, when a TCP segment with a fractional RTP packet is lost, the receiver initiates a search for the first occurrence of the fixed bit pattern.

17. A method according to claim 5, further defined by improving TCP throughput, wherein, when the Media Engine, as a sender, detects that a TCP packet (segment) is lost, a new RTP packet is inserted into the TCP packet that has to be resent.

Patent History
Publication number: 20050108411
Type: Application
Filed: Sep 1, 2004
Publication Date: May 19, 2005
Inventors: Kevin Kliland (Oslo), Knut Farner (Lier)
Application Number: 10/931,492
Classifications
Current U.S. Class: 709/230.000