Abstract: A network acceleration device is described that caches resources (e.g., files or other content) to a cache space organized into logically separate views. The views represent an abstraction that defines sets of the cached resources according to the client devices that requested the resources. The network acceleration device associates each view with a specific client device or user, and caches content requested by that specific user to the user's associated view. The network acceleration device herein may achieve a higher level of caching and increased network acceleration over a conventional network acceleration device that maintains a single cache space and shares content among multiple client devices.
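The per-user view abstraction above can be sketched in a few lines of code; the class and method names here are hypothetical, not taken from the patent:

```python
import collections

class ViewCache:
    """Sketch of a cache space partitioned into per-user 'views' (names assumed)."""
    def __init__(self):
        # Each user gets a logically separate view: user -> {url: content}.
        self._views = collections.defaultdict(dict)

    def get(self, user, url):
        # Look up a resource only in the requesting user's own view.
        return self._views[user].get(url)

    def put(self, user, url, content):
        # Cache the resource in the view associated with this user.
        self._views[user][url] = content

cache = ViewCache()
cache.put("alice", "/report.pdf", b"...")
hit = cache.get("alice", "/report.pdf")   # found in alice's view
miss = cache.get("bob", "/report.pdf")    # None: bob has a separate view
```

Because each view tracks one client's request history, a hit rate can be tuned per user rather than diluted across a shared cache.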
Abstract: Virtual Private Networks (VPNs) are supported in which customers may use popular interior gateway protocols (IGPs) without the need to convert such IGPs, running on customer devices, to a single protocol such as the Border Gateway Protocol (BGP). Scaling problems, which might otherwise occur when multiple instances of an IGP flood link-state information, are avoided by using a flooding topology that is smaller than the forwarding topology. The flooding topology may be a fully connected subset of the forwarding topology.
Abstract: A set of network devices having varying device attributes, such as attributes that vary due to different operating system versions, different hardware versions, or different hardware platforms, may be efficiently managed. A syntax file may be used to describe constraints relating to attributes of multiple versions of the network devices. At least one device configuration file (DCF) stores version-based differences relating to the different versions of the network devices; the syntax file and the at least one DCF collectively describe a set of constraints for the attributes of the network devices.
Type:
Grant
Filed:
November 10, 2004
Date of Patent:
September 7, 2010
Assignee:
Juniper Networks, Inc.
Inventors:
David Lei Zhang, Brian Yean-Shiang Leu, Chi-Chang Lin, Xiangang Huang, James E. Fehrle
Abstract: Packet processing is provided in a multiple-processor system including a first processor configured to process a packet and to create a tag associated with the packet. The tag includes information about the processing of the packet. A second processor receives the packet subsequent to the first processor and processes the packet using the tag information.
Abstract: A system selectively drops data from queues. The system includes a drop table that stores drop probabilities. The system selects one of the queues to examine and generates an index into the drop table to identify one of the drop probabilities for the examined queue. The system then determines whether to drop data from the examined queue based on the identified drop probability.
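The drop-table lookup described above resembles a table-driven RED-style decision. A minimal sketch, assuming the table is indexed by queue fill level (the probability values are illustrative, not from the patent):

```python
import random

# Drop probabilities indexed by queue occupancy level (illustrative values):
# nearly empty queues never drop, nearly full queues always drop.
DROP_TABLE = [0.0, 0.0, 0.05, 0.1, 0.25, 0.5, 0.75, 1.0]

def should_drop(queue_len, queue_capacity, rng=random.random):
    """Generate an index into the drop table from the examined queue's
    fill level, then decide probabilistically whether to drop."""
    index = min(len(DROP_TABLE) - 1,
                queue_len * len(DROP_TABLE) // queue_capacity)
    return rng() < DROP_TABLE[index]
```

An empty queue maps to index 0 (probability 0.0, never drop), while a full queue maps to the last entry (probability 1.0, always drop).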
Abstract: An ATM (asynchronous transfer mode) cell transfer apparatus includes an input interface, a switch block, and an OAM cell processing hardware block having a memory unit. The input interface receives an SDH/SONET signal on each of a plurality of first transfer paths to output an input OAM cell corresponding to the SDH/SONET signal to one of a plurality of input ports of the switch block corresponding to the first transfer path for the SDH/SONET signal to be transferred. The switch block receives the input OAM (operation and maintenance) cell from the corresponding input port as an OAM input port to output to the OAM cell processing hardware block together with a port number of the OAM input port, and receives at least one output OAM cell from the OAM cell processing hardware block to output to at least one of the plurality of output ports based on the received output OAM cell.
Abstract: In an asynchronous transfer mode switch, a plurality of queues is provided for accumulating transfer cells, and a queue assignment processing section receives a message for establishing a connection and assigns to the connection one of the queues having a forwarding rate close to, but not exceeding, a declared rate included in the message.
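The assignment rule above, which selects the queue whose forwarding rate is closest to the declared rate without exceeding it, can be sketched as follows; the function name and rate units are assumptions:

```python
def assign_queue(queue_rates, declared_rate):
    """Pick the index of the queue whose forwarding rate is closest to,
    but does not exceed, the declared rate from the setup message.
    Rates are in arbitrary units (e.g., kb/s)."""
    eligible = [r for r in queue_rates if r <= declared_rate]
    if not eligible:
        return None  # no queue can honor the declared rate
    best = max(eligible)  # closest rate that stays at or below the declaration
    return queue_rates.index(best)

# A connection declaring 600 is assigned the 512-rate queue (index 2),
# since 1024 would exceed the declaration.
chosen = assign_queue([64, 128, 512, 1024], 600)
```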
Abstract: Techniques are described for multicast content usage data collection and accounting within a network device. For example, the network device, such as a router, comprises an interface card to receive requests from one or more consumer devices. The requests specify actions concerning multicast content. The requests may include a join request that allows a consumer to join a multicast group and consume content provided by that group, a leave request that allows a consumer to leave a multicast group, and the like. The network device further includes a routing engine to asynchronously collect the requests and create a multicast usage report. The multicast usage report describes multicast content usage by each of the consumer devices. Content providers may access the usage report and derive accounting information from the usage report to update consumer accounts based on the derived accounting information.
Type:
Grant
Filed:
December 23, 2003
Date of Patent:
August 31, 2010
Assignee:
Juniper Networks, Inc.
Inventors:
John C. Scano, David Blease, Eric L. Peterson
Abstract: A hierarchical traffic policer may include a first policer configured to pass first packets when a first condition is met. The first policer also may alter selection information within the passed first packets. A second policer may be configured to pass second packets when a second condition is met. The second policer may be further configured to pass all of the passed first packets from the first policer based on the altered selection information within the passed first packets.
Type:
Grant
Filed:
October 31, 2007
Date of Patent:
August 31, 2010
Assignee:
Juniper Networks, Inc.
Inventors:
James Washburn, Spencer Greene, Rami Rahim, Stefan Dyckerhoff, Dennis C. Ferguson, Philippe Lacroute
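The hierarchical policer described above can be sketched with two token buckets, where the first-level policer marks the packets it passes and the second-level policer honors that mark. The `premium` field and all names are illustrative, not the patent's:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter (rate in bytes/s, burst in bytes)."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, size):
        # Refill tokens for the elapsed interval, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def hierarchical_police(packet, child, parent):
    """First-level policer alters selection information (a mark) in packets
    it passes; the second-level policer passes all marked packets."""
    if child.allow(len(packet["data"])):
        packet["premium"] = True       # altered selection information
    if packet.get("premium"):
        return True                    # parent honors the child's mark
    return parent.allow(len(packet["data"]))

child = TokenBucket(rate=1000.0, burst=100.0)  # child admits a small burst
parent = TokenBucket(rate=0.0, burst=0.0)      # parent admits nothing on its own
small = {"data": b"x" * 50}
passed = hierarchical_police(small, child, parent)  # child marks and passes it
```

The parent here never passes traffic on its own condition, which makes it easy to see that marked packets get through solely because of the child's mark.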
Abstract: A device includes a primary control unit and a standby control unit. The standby control unit records routing communications exchanged between the primary control unit and an external routing device in accordance with a routing protocol. A standby routing process executing on the standby control unit processes the recorded routing communications when the primary control unit fails. The standby routing process generates state information for executing the routing protocol on the standby control unit without requiring that routing sessions be reestablished with the external routing device.
Abstract: A router receives destination address information for a packet and determines, among entries in a first forwarding table, a closest match for the received destination address information. The router receives a pointer to a second forwarding table in accordance with the closest match determined in the first forwarding table and determines, among entries in the second forwarding table, a closest match for the received destination address information.
Type:
Grant
Filed:
July 26, 2006
Date of Patent:
August 31, 2010
Assignee:
Juniper Networks, Inc.
Inventors:
Manoj Leelanivas, Ravi Vaidyanathan, Ken Kuwabara, Steven Lin
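The two-stage lookup described above can be sketched as nested longest-prefix matches, where the closest match in the first table yields a pointer to a second table that is then searched with the same address. Table contents and interface names are illustrative:

```python
import ipaddress

# First table: the closest (longest) prefix match yields a pointer
# (here, simply a key) to a second forwarding table.
FIRST_TABLE = {
    "10.0.0.0/8": "table_a",
    "10.1.0.0/16": "table_b",
}
SECOND_TABLES = {
    "table_a": {"10.0.0.0/8": "ge-0/0/0"},
    "table_b": {"10.1.0.0/16": "ge-0/0/1", "10.1.2.0/24": "ge-0/0/2"},
}

def longest_match(table, addr):
    """Return the value for the longest prefix containing addr."""
    ip = ipaddress.ip_address(addr)
    best = None
    for prefix, value in table.items():
        net = ipaddress.ip_network(prefix)
        if ip in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, value)
    return best[1] if best else None

def two_level_lookup(addr):
    pointer = longest_match(FIRST_TABLE, addr)          # closest match, table 1
    return longest_match(SECOND_TABLES[pointer], addr)  # re-match in table 2
```

For `10.1.2.5`, the first table's closest match (`10.1.0.0/16`) points to `table_b`, where the closest match (`10.1.2.0/24`) yields the outgoing interface.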
Abstract: Arbitration is performed in a packet switch. In one implementation, a device for performing the arbitration may include input ports, each configured to receive sequences that define a packet, and output ports. A packet switch concurrently processes multiple ones of the received sequences to select an output port for each of the received sequences, the packet switch transferring the received sequences to the selected output ports for output from the device at different times from one another.
Abstract: Methods, apparatus, and products are disclosed for routing frames in a TRILL network using service VLAN identifiers by: receiving a frame from an ingress bridge node for transmission through the TRILL network to a destination node that connects to the TRILL network through an egress node, the received frame including a customer VLAN identifier, a service VLAN identifier uniquely assigned to the ingress bridge node, and a destination node address for the destination node, the received frame not having MAC-in-MAC encapsulation; adding, in dependence upon the service VLAN identifier and the destination node address, a TRILL header conforming to the TRILL protocol, the TRILL header including an ingress bridge nickname and an egress bridge nickname; and routing, to the egress bridge node through which the destination node connects to the network, the frame in dependence upon the ingress bridge nickname and the egress bridge nickname.
Abstract: The invention is directed toward techniques for Multi-Protocol Label Switching (MPLS) upstream label assignment for the Resource Reservation Protocol with Traffic Engineering (RSVP-TE). The techniques include extensions to RSVP-TE that enable distribution of upstream-assigned labels in Path messages from an upstream router to two or more downstream routers of a tunnel established over a network. The tunnel may comprise an RSVP-TE P2MP Label Switched Path (LSP) or an Internet Protocol (IP) multicast tunnel. The techniques also include extensions to RSVP-TE that enable a router to advertise upstream label assignment capability to neighboring routers in the network. The MPLS upstream label assignment using RSVP-TE described herein enables a branch router to avoid traffic replication on a Local Area Network (LAN) for RSVP-TE P2MP LSPs.
Abstract: A packet header processing engine includes a memory having a number of distinct portions for respectively storing different types of descriptor information for a header of a packet. A packet header processing unit includes a number of pointers corresponding to the number of distinct memory portions. The packet header processing unit is configured to retrieve the different types of descriptor information from the number of distinct memory portions and to generate header information from the different types of descriptor information.
Type:
Application
Filed:
May 4, 2010
Publication date:
August 26, 2010
Applicant:
JUNIPER NETWORKS, INC.
Inventors:
Raymond Marcelino Manese LIM, Jeffrey G. LIBBY
Abstract: Techniques are described for establishing an overall label switched path (LSP) for load balancing network traffic being sent across a network using a resource reservation protocol such as the Resource Reservation Protocol with Traffic Engineering (RSVP-TE). The techniques include extensions to the RSVP-TE protocol that enable a router to send Path messages for establishing a tunnel that includes a plurality of sub-paths for the overall LSP. The tunnel may comprise a single RSVP-TE LSP that is configured to load balance network traffic across different sub-paths of the RSVP-TE LSP over the network.
Abstract: A method for communicating packet multimedia data between a source endpoint and a destination endpoint is disclosed, wherein at least the source endpoint is within a virtual private network. The method comprises the steps of: receiving, at a signaling controller, a first signaling packet from the source endpoint; determining whether the source endpoint and destination endpoint may communicate directly over the same virtual private network; when the endpoints cannot communicate directly over the same virtual private network, associating a unique identifier of the source endpoint with a virtual private network identification marker; and when the endpoints can communicate directly over the same virtual private network, instructing the source endpoint and destination endpoint to communicate media packets directly.
Abstract: A method and network system are provided in which proper VPI values are allocated after the user devices are connected with the network device. A user device transmits a first specific ATM cell, while the network device receives the first specific ATM cell and transmits toward the user device a second specific ATM cell that carries a proper VPI value in the information field of the ATM cell. The proper VPI value in the second specific ATM cell is stored and used by the user device as its own VPI value for communication.
Abstract: A group poll mechanism (GPM) schedules upstream bandwidth for cable modems by pointing a request opportunity normally reserved for a single service flow to more than one service flow. Essentially, instead of dedicating a seldom-used poll request to each individual service flow, the same request opportunity is pointed to multiple service flows; in such a scheme, the GPM gives the same mini-slot to multiple service flows. The GPM implements the use of placeholder SIDs and a novel mapping of information elements in MAP messages.
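The placeholder-SID idea above can be sketched as a mapping from one SID, and hence one request opportunity, to a group of service flows; the SID value and flow names are purely illustrative:

```python
# A placeholder SID maps a single request opportunity (mini-slot) to a
# group of service flows instead of one flow (values are illustrative).
GROUP_SIDS = {
    0x3E00: ["flow_1", "flow_2", "flow_3"],  # placeholder SID -> flow group
}

def flows_for_request_opportunity(sid):
    """Return every service flow that may contend in the mini-slot
    advertised for this SID in a MAP message."""
    return GROUP_SIDS.get(sid, [])
```

A scheduler consulting this mapping would grant the single mini-slot to the whole group rather than polling each flow separately.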
Abstract: A gateway for screening packets transferred over a network. The gateway includes a plurality of network interfaces, a memory, and a memory controller. Each network interface receives and forwards messages from a network through the gateway. The memory temporarily stores packets received from a network. The memory controller is coupled to each of the network interfaces and is configured to coordinate the transfer of received packets to and from the memory using a memory bus. The gateway includes a firewall engine coupled to the memory bus. The firewall engine is operable to retrieve packets from the memory and screen each packet prior to forwarding a given packet through the gateway and out an appropriate network interface. A local bus is coupled between the firewall engine and the memory, providing a second path for retrieving packets from memory when the memory bus is busy.