Abstract: A network device includes input logic and output logic. The input logic receives multiple packets, where each of the multiple packets has a variable length, and generates a first error detection code for one of the received multiple packets. The input logic further fragments the one of the variable length packets into one or more fixed length cells, where the fragmentation produces a cell of the one or more fixed length cells that includes unused overhead bytes that fill up the cell beyond a last portion of the fragmented one of the variable length packets, and selectively inserts the first error detection code into the overhead bytes. The input logic also forwards the one or more fixed length cells towards the output logic of the network device.
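The mechanism above can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical 64-byte cell size and CRC-32 as the error detection code (the abstract does not name either):

```python
import zlib

CELL_SIZE = 64  # hypothetical fixed cell length in bytes


def fragment_packet(packet: bytes) -> list[bytes]:
    """Split a variable-length packet into fixed-length cells, placing an
    error detection code (here CRC-32, an assumption) in the unused
    overhead bytes of the final cell when there is room for it."""
    crc = zlib.crc32(packet).to_bytes(4, "big")
    cells = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    last = cells[-1]
    pad = CELL_SIZE - len(last)
    if pad >= 4:
        # Selectively insert the code into the overhead bytes,
        # then zero-fill the remainder of the cell.
        cells[-1] = last + crc + bytes(pad - 4)
    else:
        # Not enough overhead left: pad only, no code inserted.
        cells[-1] = last + bytes(pad)
    return cells
```

The "selectively inserts" language maps to the `pad >= 4` branch: a packet whose length leaves fewer than four overhead bytes in its last cell simply omits the code.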
Abstract: A first network device and a second network device for forwarding data units are included in a network. The second network device is configured to receive data units from the first network device via an output interface from the first network device. Each of the network devices is further configured to form a first value derived from information pertaining to a received data unit, perform a function on the first value to provide a second value, wherein the function of the first network device is different from the function of the second network device when forwarding a same data unit, select an output interface based on the second value, and forward a received data unit via an interface.
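Using a different function at each hop prevents every device from making the same load-balancing choice for the same data unit (traffic polarization). A rough sketch, assuming the per-device difference comes from a device-specific seed and SHA-256 as the hash (neither is specified by the abstract):

```python
import hashlib


def select_interface(flow_key: bytes, device_seed: int, num_interfaces: int) -> int:
    """Form a first value from fields of the data unit (the flow key),
    then apply a device-specific function to produce a second value.
    Because each device mixes in its own seed, two devices hashing the
    same data unit select interfaces independently."""
    first_value = hashlib.sha256(flow_key).digest()
    second_value = hashlib.sha256(
        device_seed.to_bytes(8, "big") + first_value
    ).digest()
    return int.from_bytes(second_value[:4], "big") % num_interfaces
```

For a given device the selection is deterministic per flow (packets of one flow stay in order), while devices with different seeds spread the same set of flows differently.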
Abstract: A method and apparatus deliver variable bit rate media files to client systems. The media server can dynamically detect when a client can accept a different bit rate version of media content. The media server can smoothly switch between different bit rate versions of the media content as it delivers the media content to client systems. A client system can also request a different bit rate version of the media content while playing it.
Abstract: In one embodiment, a method includes receiving on a network side of a data center network a migration notification related to migration of a virtual resource from a source host device to a target host device. The source host device and the target host device can be on a server side of the data center network different than the network side of the data center network. The virtual resource can be logically defined at the source host device. The method can also include defining, before migration of the virtual resource is completed, an identifier representing a mapping of the virtual resource to the target host device in response to the migration notification. The defining can be performed on the network side of the data center network.
Abstract: A device receives distances between an access point, located on a floor of a building, and other access points located on the same floor, and determines, based on the distances, relative location information associated with the access point, where the relative location information provides a location of the access point relative to the other access points. The device also determines, using a triangulation method, an actual location of the access point based on the relative location information. The device further maps the actual location of the access point to a floor plan of the floor, and displays the floor plan with the mapped actual location of the access point.
Abstract: A power system includes a switch, a capacitor and a comparator circuit. The power system receives a signal to turn off power supplied to the power system, turns off the switch that is used to supply power to the system and discharges the capacitor. The power system also compares a voltage across the discharging capacitor to a threshold voltage value, and turns on the switch to allow power to be supplied to the power system when the compared voltage across the discharging capacitor equals the threshold voltage value.
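The comparator's trip point corresponds to a fixed delay, since an RC discharge follows V(t) = V0 * exp(-t / RC). A small worked sketch (the R and C values are illustrative, not from the abstract):

```python
import math


def time_to_threshold(v0: float, v_threshold: float,
                      r_ohms: float, c_farads: float) -> float:
    """Time for a discharging capacitor to fall from v0 to v_threshold,
    using V(t) = v0 * exp(-t / (R*C)).  When the comparator sees the
    capacitor voltage reach the threshold, the switch is turned back on,
    restoring power to the system."""
    return r_ohms * c_farads * math.log(v0 / v_threshold)
```

With RC = 1 s, the voltage falls to V0/e after exactly one time constant, so the capacitor choice sets how long the system stays off.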
Abstract: In some embodiments, an apparatus comprises a processing module, disposed within a first switch fabric element, configured to detect a second switch fabric element having a routing module when the second switch fabric element is operatively coupled to the first switch fabric element. The processing module is configured to define a virtual processing module configured to be operatively coupled to the second switch fabric element. The virtual processing module is configured to receive a request from the second switch fabric element for forwarding information and the virtual processing module is configured to send the forwarding information to the routing module.
Abstract: A lifetime-based memory management scheme is described, whereby a network device first determines an expected lifetime for received packets, which correlates to the expected output queue latency time of the packets. The network device then buffers packets having matching lifetimes to memory pages dedicated to that lifetime.
Type:
Grant
Filed:
December 11, 2009
Date of Patent:
May 22, 2012
Assignee:
Juniper Networks, Inc.
Inventors:
Srinivas Perla, David J. Ofelt, Jon Losee
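The lifetime-based scheme above groups packets so that an entire memory page can be reclaimed at once when its lifetime expires. A minimal sketch, assuming a hypothetical 4 KiB page and millisecond lifetime buckets (page size and bucket granularity are not specified by the abstract):

```python
from collections import defaultdict

PAGE_SIZE = 4096  # hypothetical memory page size in bytes


class LifetimeBuffer:
    """Buffer packets onto memory pages dedicated to a single expected
    lifetime (the expected output-queue latency), so every packet on a
    page becomes reclaimable at the same time."""

    def __init__(self):
        # lifetime -> list of pages, each tracked by its filled byte count
        self.pages = defaultdict(list)

    def buffer_packet(self, packet: bytes, lifetime_ms: int) -> None:
        pages = self.pages[lifetime_ms]
        if not pages or pages[-1] + len(packet) > PAGE_SIZE:
            pages.append(0)  # open a fresh page dedicated to this lifetime
        pages[-1] += len(packet)

    def expire(self, lifetime_ms: int) -> int:
        """Free all pages for one lifetime at once; returns pages freed."""
        return len(self.pages.pop(lifetime_ms, []))
```

Because no page mixes lifetimes, expiry is a whole-page operation with no per-packet compaction.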
Abstract: A system includes a cable having a first end portion, a second end portion and a cable display module mechanically coupled to the first end portion of the cable. The cable has at least one optical fiber extending through the cable between the first end portion and the second end portion. The at least one optical fiber is configured to optically couple a first chassis with a second chassis when the first end portion of the cable is mechanically coupled to the first chassis and the second end portion of the cable is mechanically coupled to the second chassis. The cable display module is configured to be electrically coupled to the first chassis when the first end portion of the cable is mechanically coupled to the first chassis such that the cable display module receives from the first chassis an electrical signal representing an identifier associated with the second chassis.
Abstract: A device is configured to receive authorization information from a first network device and to receive a request that data units sent to a destination device contain authorization information, where the request is received from a second network device. The device is configured to assemble authorized data units by associating the authorization information with content intended for a destination device, where the content can be exchanged with the destination device during authorized communication. The device is configured to provide at least one of the authorized data units to the second network device so that the second network device can establish the authorized communication between the device and the destination device.
Abstract: In general, the principles of this invention are directed to techniques of locally caching endpoint security information. In particular, a local access module caches endpoint security information maintained by a remote server. When a user attempts to access a network resource through an endpoint device, the endpoint device sends authentication information and health information to the local access module. When the local access module receives the authentication information and the health information, the local access module controls access to the network resource based on the cached endpoint security information, the authentication information, and a security state of the endpoint device described by the health information.
Abstract: Methods of screening incoming packets are provided. A first firewall detects a tunnel formation. A second firewall maintains a list of open firewall sessions. Each tunnel has one or more associated firewall sessions. The first firewall detects variable situations, such as when the tunnel is torn down, and notifies the second firewall so that, for example, the second firewall can act to clear an associated firewall session from the firewall session list. Incoming packets that are associated with firewall sessions that have been cleared from the firewall session list may not be passed through the second firewall.
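The two-firewall interaction above reduces to keeping the second firewall's session list in sync with tunnel state observed by the first. A sketch of that bookkeeping (session and tunnel identifiers are placeholders, not from the abstract):

```python
class SessionFirewall:
    """Second-firewall bookkeeping: sessions are keyed by the tunnel
    they belong to, and a tear-down notification from the first
    firewall clears every associated session, so packets matching
    those sessions are no longer passed through."""

    def __init__(self):
        self.open_sessions = {}  # session_id -> tunnel_id

    def open_session(self, session_id, tunnel_id) -> None:
        self.open_sessions[session_id] = tunnel_id

    def on_tunnel_torn_down(self, tunnel_id) -> None:
        # Notification from the first firewall: clear associated sessions.
        self.open_sessions = {
            s: t for s, t in self.open_sessions.items() if t != tunnel_id
        }

    def permit(self, session_id) -> bool:
        return session_id in self.open_sessions
```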
Abstract: Virtual Private Networks (VPNs) are supported in which customers may use popular interior gateway protocols (IGPs) without the need to convert such IGPs, running on customer devices, to a single protocol, such as the border gateway protocol (BGP). Scaling problems, which might otherwise occur when multiple instances of an IGP flood link state information, are avoided by using a flooding topology which is smaller than a forwarding topology. The flooding topology may be a fully connected subset of the forwarding topology.
Abstract: A device, connected to a monitoring appliance, may include a traffic analyzer to receive a data unit and identify a traffic flow associated with the data unit. The device may also include a traffic processor to receive the data unit and information regarding the identified traffic flow from the traffic analyzer, determine that the identified traffic flow is to be monitored by the monitoring appliance, change a port number, associated with the data unit, to a particular port number to create a modified data unit when the identified traffic flow is to be monitored by the monitoring appliance, and send the modified data unit to the monitoring appliance.
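The traffic-processor step above amounts to conditionally rewriting a header field so the data unit is steered toward the monitoring appliance. A sketch using a dictionary as a stand-in for the data unit and a made-up monitor port (both are assumptions for illustration):

```python
MONITOR_PORT = 9999  # hypothetical port the monitoring appliance listens on


def process_data_unit(data_unit: dict, monitored_flows: set) -> dict:
    """Traffic-processor sketch: if the data unit's identified flow is
    marked for monitoring, change its port number to the particular
    monitor port, creating a modified data unit to send to the
    monitoring appliance; otherwise forward it unchanged."""
    flow = (data_unit["src"], data_unit["dst"], data_unit["dst_port"])
    if flow in monitored_flows:
        return dict(data_unit, dst_port=MONITOR_PORT)
    return data_unit
```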
Abstract: A method performed by a provider edge device includes generating pseudo-wire tables based on virtual private local area network service advertisements from other provider edge devices, where the provider edge device services customer edge devices, and establishing pseudo-wires with respect to the other provider edge devices, based on the pseudo-wire tables, where the pseudo-wires include an active pseudo-wire and at least one standby pseudo-wire with respect to each of the other provider edge devices. The method also includes generating and advertising a VPLS advertisement to the other provider edge devices, detecting a communication link failure associated with one of the customer edge devices that the provider edge device services, and determining whether the at least one standby pseudo-wire needs to be utilized because of the communication link failure.
Abstract: Techniques are described for providing a kernel with the ability to execute functions from a kernel module during processor initialization and initializing a platform using platform-specific modules. An initialization function of the platform-specific module is executed before a platform-independent phase of the kernel of the operating system is executed. In one example, a device includes a computer-readable medium that stores instructions for a platform-specific module comprising an initialization function, and instructions for an operating system comprising a kernel, wherein the kernel comprises a boot sequence comprising a platform-dependent phase and a platform-independent phase, and a processor to execute instructions stored in the computer-readable medium.
Abstract: Label distribution protocol (LDP) signaled label-switched paths (LSPs) are supported without requiring information about remote autonomous systems (ASs) to be injected into the local interior gateway protocol (IGP). This may be done by (i) decoupling a forwarding equivalency class (FEC) element from the routing information, and (ii) specifying a next hop on which the FEC relies. An LDP messaging structure (e.g., an LDP type-length-value (TLV)) that includes a label, FEC information (e.g., a host address or prefix of an egress LSR of the LSP) and a next hop (e.g., a host address or prefix of a border node, such as an AS border router (ASBR)) may be provided. This messaging structure may be included in one or more of (a) label mapping messages, (b) label withdraw messages, and (c) label release messages.
Type:
Grant
Filed:
November 5, 2003
Date of Patent:
May 8, 2012
Assignee:
Juniper Networks, Inc.
Inventors:
Ina Minei, Nischal Sheth, Pedro R. Marques, Yakov Rekhter
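The LDP messaging structure described above pairs FEC information with an explicit next hop inside one TLV. A byte-layout sketch, assuming IPv4 host addresses, a 32-bit label field, and a made-up TLV type code (the abstract specifies none of these encodings):

```python
import socket
import struct

FEC_NEXT_HOP_TLV_TYPE = 0x8F00  # hypothetical TLV type code


def encode_fec_next_hop_tlv(label: int, fec_host: str, next_hop: str) -> bytes:
    """Pack a label, a FEC host address (the egress LSR of the LSP),
    and a next-hop address (e.g. an AS border router) into a single
    type-length-value structure, mirroring the decoupling of the FEC
    element from local routing information."""
    value = struct.pack("!I", label)          # 4-byte label field
    value += socket.inet_aton(fec_host)       # FEC: egress LSR address
    value += socket.inet_aton(next_hop)       # next hop the FEC relies on
    return struct.pack("!HH", FEC_NEXT_HOP_TLV_TYPE, len(value)) + value
```

Such a TLV could then be carried, as the abstract notes, in label mapping, withdraw, or release messages.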
Abstract: In one embodiment, a processor-readable medium storing code representing instructions that when executed by a processor cause the processor to store a set of stream signatures representing a set of test streams. The code can be configured to cause the processor to receive at a test device a stream signature from a test packet after the test packet has been processed at a device-under-test. The test packet can emulate at least a portion of network traffic. The code can also be configured to cause the processor to define an indicator representing that the test packet is from a new test stream when the stream signature from the test packet is different than each stream signature from the set of stream signatures. The new test stream is excluded from the set of test streams.
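The new-stream test above is a set-membership check over the stored stream signatures. A minimal sketch (the signature representation is left opaque, as in the abstract):

```python
def detect_new_streams(known_signatures: set, observed: list) -> list:
    """For each stream signature carried by a test packet, define an
    indicator that the packet is from a new test stream: true exactly
    when the signature differs from every stored stream signature.
    The stored set is left unchanged, since the new test stream is
    excluded from the set of test streams."""
    return [sig not in known_signatures for sig in observed]
```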
Abstract: A network device is described that concurrently executes more than one instance of an operating system on a single processor. Each instance of the operating system executes completely independently of the other instances. In this way, disparate instances may exist for the same operating system or for different operating systems; for example, one instance of the operating system may emulate a routing engine while another instance emulates an interface controller. A hyper scheduler performs context switches between the operating system instances to enable the processor to execute them concurrently. The techniques may provide a low-cost alternative to employing multiple processors within a network device, such as a router, to execute multiple independent operating systems.