Abstract: In one embodiment, a method includes classifying a data packet received at a switch fabric, selecting an action descriptor in response to the classifying, and processing an action defined in the action descriptor. The classifying is based on a primary classification condition and a first portion of the data packet. The action descriptor is associated with the primary classification condition. The processing includes determining whether a secondary classification condition is satisfied by a second portion of the data packet.
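A hedged illustration of this two-stage classification (a primary condition over a first portion of the packet selects an action descriptor, and the descriptor's action then tests a secondary condition over a second portion) is sketched below in Python. The names, the portion boundaries, and the table layout are hypothetical; this is not the patented implementation.

    # Two-stage classification sketch: the header (first portion) selects an
    # action descriptor; the descriptor's action applies only if its secondary
    # condition holds for the payload (second portion). Purely illustrative.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ActionDescriptor:
        secondary_condition: Callable[[bytes], bool]  # tested on the second portion
        action: Callable[[bytes], None]               # applied if it is satisfied

    def classify_and_process(packet: bytes, table: dict) -> None:
        header = packet[:4]               # first portion: e.g. a 4-byte field
        payload = packet[4:]              # second portion: the remainder
        descriptor = table.get(header)    # primary classification condition
        if descriptor and descriptor.secondary_condition(payload):
            descriptor.action(packet)

    # Example: act on packets whose header matches and whose payload is non-empty.
    table = {b"\x08\x00\x00\x01": ActionDescriptor(
        secondary_condition=lambda p: len(p) > 0,
        action=lambda pkt: print("forwarding", len(pkt), "bytes"))}
    classify_and_process(b"\x08\x00\x00\x01hello", table)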
Abstract: An intrusion detection system is described that is capable of applying a plurality of stacked (layered) application-layer decoders to extract encapsulated application-layer data from a tunneled packet flow produced by multiple applications operating at the application layer, or layer seven (L7), of a network stack. In this way, the IDS is capable of performing application identification and decoding even when one or more software applications utilize other software applications for data transport to produce a packet flow from a network device. The protocol decoders may be dynamically swapped, reused and stacked (layered) when applied to a given packet or packet flow.
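The decoder stacking can be pictured, very loosely, as a chain of functions in which each decoder strips one layer of encapsulation and hands the inner payload to the next. The Python sketch below is only an analogy for that layering; the decoders and the tunneled example payload are invented for illustration and do not reflect the IDS's actual decoder framework.

    # Stacked (layered) decoding sketch: each decoder extracts the payload it
    # encapsulates, so application data remains reachable even when one
    # application tunnels through another. Hypothetical example.
    import base64

    def decode_stacked(payload: bytes, decoders) -> bytes:
        """decoders: ordered callables, outermost protocol first; each returns
        the encapsulated payload it extracts."""
        for decoder in decoders:
            payload = decoder(payload)
        return payload

    # Example: an outer header-framed tunnel carrying base64-encoded inner data.
    outer = lambda p: p.split(b"\r\n\r\n", 1)[1]   # strip a header block
    inner = lambda p: base64.b64decode(p)          # decode the inner transport
    print(decode_stacked(b"X-Tunnel: 1\r\n\r\naGVsbG8=", [outer, inner]))  # b'hello'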
Abstract: Principles of the invention are described for providing virtual private local area network service (VPLS) multicast instances across a public network by utilizing multicast trees. In particular, the VPLS multicast instances transport layer two (L2) multicast traffic, such as Ethernet packets, between customer networks via the public network. The principles described herein enable VPLS multicast instances to handle high bandwidth multicast traffic. The principles also reduce the state and the overhead of maintaining the state in the network by removing the need to perform snooping between routers within the network.
Abstract: Ordering logic ensures that data items being processed by a number of parallel processing units are unloaded from the processing units in the original per-flow order in which the data items were loaded into the parallel processing units. The ordering logic includes a pointer memory, a tail vector, and a head vector. Through these three elements, the ordering logic keeps track of a number of “virtual queues” corresponding to the data flows. A round robin arbiter unloads data items from the processing units only when a data item is at the head of its virtual queue.
Type: Application
Filed: September 30, 2011
Publication date: February 2, 2012
Applicant: JUNIPER NETWORKS, INC.
Inventors: Dennis C. FERGUSON, Philippe LACROUTE, Chi-Chung CHEN, Gerald CHEUNG, Tatao CHUANG, Pankaj PATEL, Viswesh ANANTHAKRISHNAN
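A rough software analogue of the virtual-queue bookkeeping described in the preceding entry's abstract (a pointer memory plus head and tail vectors, consulted by a round-robin arbiter) is sketched below; the class and method names are hypothetical and the sketch is not the patented hardware.

    # Virtual-queue ordering sketch: slots stand in for the parallel processing
    # units, and per-flow order is tracked with a pointer memory plus head and
    # tail vectors. Illustrative only.
    class OrderingLogic:
        def __init__(self, num_slots: int):
            self.pointer = [None] * num_slots  # pointer memory: next slot in the same flow
            self.head = {}                     # flow id -> slot at the head of its virtual queue
            self.tail = {}                     # flow id -> slot at the tail of its virtual queue

        def load(self, slot: int, flow: str) -> None:
            """Record that a data item of `flow` was loaded into `slot`."""
            if flow in self.tail:
                self.pointer[self.tail[flow]] = slot   # link behind the previous tail
            else:
                self.head[flow] = slot                 # queue was empty: slot becomes the head
            self.tail[flow] = slot

        def can_unload(self, slot: int, flow: str) -> bool:
            """The arbiter may unload `slot` only if it heads its flow's queue."""
            return self.head.get(flow) == slot

        def unload(self, slot: int, flow: str) -> None:
            nxt = self.pointer[slot]
            self.pointer[slot] = None
            if nxt is None:                            # the slot was also the tail
                del self.head[flow], self.tail[flow]
            else:
                self.head[flow] = nxt

    # A round-robin arbiter would cycle over finished slots and call unload()
    # only when can_unload() is True, preserving per-flow order.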
Abstract: A gateway may be used as a common entry point for a network. Subscribers may request network services through the gateway. The gateway may identify management entities that are appropriate for a particular subscriber's request by contacting a network information collector (NIC). The NIC may include one or more, possibly distributed, resolver components and information collection agents. The resolvers are responsible for the resolution process, which may specify the resolution functions required to identify the management entities. The information collection agents may be customizable software agents that collect state information from other elements in the network.
Type: Grant
Filed: November 6, 2008
Date of Patent: January 31, 2012
Assignee: Juniper Networks, Inc.
Inventors: Wladimir de Lara Araujo Filho, Sherine El-Medani, Martin Bokaemper
Abstract: A media gateway is coupled to an Internet Protocol (IP) network through a router. The router and the media gateway communicate through a slim protocol that allows the media gateway to reserve connections over the IP network that have certain minimum bandwidth and latency attributes. The router handles obtaining the requested IP circuit for the client. The media gateways need only execute a relatively simple client application and do not have to be independently capable of obtaining IP-QoS information from the IP network.
Abstract: A controller may receive a request from an endpoint and determine whether the endpoint connects via a first network or a second network. The controller may download first software to the endpoint when the endpoint connects via the first network, where the first software facilitates authentication of the endpoint via another device and instructs the endpoint to not store information regarding the controller. The controller may download second software to the endpoint when the endpoint connects via the second network, where the second software facilitates authentication of the endpoint by the device and instructs the endpoint to store information regarding the controller.
Abstract: A system schedules traffic flows on an output port using a circular memory structure. The circular memory structure may be a rate wheel that includes a group of sequentially arranged slots. The rate wheel schedules the traffic flows in select ones of the slots based on traffic shaping parameters assigned to the flows. The rate wheel compensates for collisions between multiple flows that occur in the slots by subsequently skipping empty slots.
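A minimal software sketch of a rate wheel along these lines appears below. It assumes flow identifiers are queued into sequentially arranged slots and that empty slots are skipped when serving; the names and the single-wheel, software-timed model are hypothetical simplifications, not the patented scheduler.

    # Rate-wheel sketch: shaping parameters decide how many slots ahead a flow
    # is placed; colliding flows share a slot, and empty slots are skipped so
    # collision-induced delay does not accumulate. Illustrative only.
    class RateWheel:
        def __init__(self, num_slots: int):
            self.slots = [[] for _ in range(num_slots)]  # each slot holds queued flow ids
            self.current = 0

        def schedule(self, flow: str, interval: int) -> None:
            """Place `flow` `interval` slots ahead, per its shaping parameters."""
            target = (self.current + interval) % len(self.slots)
            self.slots[target].append(flow)

        def next_flow(self):
            """Serve one flow; skip over empty slots."""
            for _ in range(len(self.slots)):
                if self.slots[self.current]:
                    return self.slots[self.current].pop(0)
                self.current = (self.current + 1) % len(self.slots)  # skip empty slot
            return None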
Abstract: An example network system includes a plurality of endpoint computing resources, a business policy graph of a network that includes a set of the plurality of endpoint computing resources configured as a security domain, a set of policy enforcement points (“PEPs”) configured to enforce network policies, and a network management module (“NMM”). The NMM is configured to receive an indication of a set of network policies to apply to the security domain, automatically determine which subset of the set of PEPs is required to enforce the set of network policies based on physical network topology information readable by the NMM that includes information about the location of the endpoint computing resources and the set of PEPs within the network, and apply the network policies to the subset of PEPs in order to enforce the network policies against the set of endpoint computing resources of the security domain.
Type: Application
Filed: July 22, 2010
Publication date: January 26, 2012
Applicant: JUNIPER NETWORKS, INC.
Inventors: Anoop V. Kartha, Kamil Imtiaz, Ahzam Ali, Amarnath Bachhu Satyan, Firdousi Zackariya, Nadeem Khan, Sanjay Agarwal
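As a loose illustration of how the NMM's determination might look when the physical topology information is reduced to a mapping from each endpoint to the PEPs on its path, consider the Python sketch below; the names and data are hypothetical, and this is not the claimed algorithm.

    # Pick the subset of policy enforcement points needed for a security domain,
    # assuming topology info maps each endpoint to the PEPs in front of it.
    def required_peps(security_domain, topology):
        needed = set()
        for endpoint in security_domain:
            needed |= topology.get(endpoint, set())
        return needed

    topology = {"vm-1": {"pep-edge-1"}, "vm-2": {"pep-edge-2", "pep-core"}}
    print(required_peps({"vm-1", "vm-2"}, topology))
    # The NMM would then apply the network policies to exactly these PEPs.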
Abstract: In general, techniques are described for hardware-based detection and automatic restoration of a computing device from a compromised state. Moreover, the techniques provide for automatic, hardware-based restoration of selective software components from a trusted repository. The hardware-based detection and automatic restoration techniques may be integrated within a boot sequence of a computing device so as to efficiently and cleanly replace only the infected software components.
Abstract: To address shortcomings in the prior art, the invention uses fate-sharing information to compute backup paths. Fate-sharing information relates groups of nodes or links according to common characteristics, attributes, or shared resources (e.g., a shared power supply, close proximity, same physical link). In one embodiment, fate-sharing information includes costs associated with groups of nodes or links. When a primary path contains a link or node that is in a fate-sharing group, the other links or nodes in the fate-sharing group are assigned the cost associated with that fate-sharing group. The node computing the backup path takes into account the assigned cost together with other node and link costs. Discovering the existence of the relationships and assigning costs to the groups may be done manually or automatically.
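One way to picture the cost assignment is sketched below: every element of a fate-sharing group that touches the primary path receives that group's cost as a penalty before the backup path is computed. The function signature and data layout are hypothetical illustrations, not the claimed method.

    # Assign fate-sharing penalties: if a group shares fate with the primary
    # path, its other members become expensive for the backup computation.
    def fate_sharing_costs(primary_path, groups):
        """groups: (members, cost) pairs, where members is a set of links or
        nodes sharing a resource (power supply, conduit, proximity, ...)."""
        extra = {}
        on_primary = set(primary_path)
        for members, cost in groups:
            if members & on_primary:                  # group touches the primary path
                for element in members - on_primary:  # penalize its other members
                    extra[element] = extra.get(element, 0) + cost
        return extra

    primary = ["A-B", "B-C"]
    groups = [({"B-C", "D-E"}, 1000)]   # B-C and D-E share a conduit
    print(fate_sharing_costs(primary, groups))   # {'D-E': 1000}
    # The computing node adds these penalties to its normal link costs before
    # running its shortest-path computation for the backup.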
Abstract: A device, connected to a monitoring appliance, may include a traffic analyzer to receive a data unit and identify a traffic flow associated with the data unit. The device may also include a traffic processor to receive the data unit and information regarding the identified traffic flow from the traffic analyzer, determine that the identified traffic flow is to be monitored by the monitoring appliance, change a port number, associated with the data unit, to a particular port number to create a modified data unit when the identified traffic flow is to be monitored by the monitoring appliance, and send the modified data unit to the monitoring appliance.
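The flow-keyed port rewrite might be illustrated, in much-simplified software form, as follows; the data-unit dictionary, the monitored-flow set, and the port value are all hypothetical.

    # Rewrite the port of data units belonging to monitored flows and hand the
    # modified unit to the monitoring appliance. Illustrative only.
    MONITOR_PORT = 9999                                # hypothetical "particular port number"
    monitored_flows = {("10.0.0.1", "10.0.0.2", 6)}    # (src, dst, protocol) tuples

    def send_to_monitoring_appliance(unit: dict) -> None:
        print("mirroring to appliance:", unit)

    def process(data_unit: dict) -> dict:
        flow = (data_unit["src"], data_unit["dst"], data_unit["proto"])
        if flow in monitored_flows:
            modified = dict(data_unit, dport=MONITOR_PORT)  # change the port number
            send_to_monitoring_appliance(modified)
            return modified
        return data_unit

    process({"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6, "dport": 80})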
Abstract: In general, techniques are described for securely exchanging network access control information. The techniques may be useful in situations where an endpoint device and an access control device perform a tightly-constrained handshake sequence of a network protocol when the endpoint device requests access to a network. The handshake sequence may be constrained in a variety of ways. Due to the constraints of the handshake sequence, the endpoint device and the access control device may be unable to negotiate a set of nonce information during the handshake sequence. For this reason, the access control device uses a previously negotiated set of nonce information and other configuration information associated with the endpoint device as part of a process to determine whether the endpoint device should be allowed to access the protected networks.
Abstract: A pipelined reorder engine reorders data items received over a network on a per-source basis. Context memories correspond to each of the possible sources. The pipeline includes a plurality of pipeline stages that together simultaneously operate on the data items. The context memories are operatively coupled to the pipeline stages and store information relating to a state of reordering for each of the sources. The pipeline stages read from and update the context memories based on the source of the data item being processed.
Type: Grant
Filed: July 8, 2009
Date of Patent: January 24, 2012
Assignee: Juniper Networks, Inc.
Inventors: Rami Rahim, Venkateswarlu Talapaneni, Philippe G Lacroute
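A software analogue of the per-source reordering described in the preceding entry's abstract is sketched below, with a context object per source holding the next expected sequence number and a buffer of early arrivals. It illustrates only the idea of per-source context state, not the pipelined hardware design; all names are hypothetical.

    # Per-source reorder sketch: each source has its own context, so reordering
    # for one source never blocks another.
    class ReorderContext:
        """State of reordering for one source."""
        def __init__(self):
            self.expected = 0   # next sequence number to release
            self.pending = {}   # items that arrived early, keyed by sequence number

    def reorder(contexts, source, seq, item):
        """Return the items that can now be released, in order, for `source`."""
        ctx = contexts.setdefault(source, ReorderContext())
        ctx.pending[seq] = item
        released = []
        while ctx.expected in ctx.pending:
            released.append(ctx.pending.pop(ctx.expected))
            ctx.expected += 1
        return released

    contexts = {}
    print(reorder(contexts, "srcA", 1, "b"))   # [] -- still waiting for 0
    print(reorder(contexts, "srcA", 0, "a"))   # ['a', 'b']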
Abstract: An alternating current (AC) power cord retainer is configured to be incorporated into or connected to a power cord, instead of the electronic device to which the cord may be connected. The power cord retainer is configured to be received within and engage the same receptacle within which the plug of the power cord is received.
Abstract: A method of setting a path in a network using an Internet protocol includes determining whether a first label switching path having adequate bandwidth for transferring a packet between two label switching routers exists. The method also includes setting a new label switching path when it is determined that the first label switching path does not exist.
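The decision described here, reuse an adequate existing label switching path or set up a new one, might be sketched as follows; the path representation and bandwidth bookkeeping are hypothetical simplifications, not the patented signaling procedure.

    # Reuse an existing LSP with enough bandwidth between the two label
    # switching routers, or set up a new one. Illustrative only.
    def select_or_create_lsp(existing_lsps, src, dst, required_bw):
        """existing_lsps: list of dicts with 'src', 'dst', 'available_bw' keys."""
        for lsp in existing_lsps:
            if lsp["src"] == src and lsp["dst"] == dst and lsp["available_bw"] >= required_bw:
                return lsp                      # an adequate LSP already exists
        new_lsp = {"src": src, "dst": dst, "available_bw": required_bw}
        existing_lsps.append(new_lsp)           # otherwise set up a new LSP
        return new_lsp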
Abstract: A system schedules traffic flows on an output port using circular memory structures. The circular memory structures may include rate wheels that include a group of sequentially arranged slots. The traffic flows may be assigned to different rate wheels on a per-priority basis.
Abstract: A system includes a queue that stores P data units, each data unit including multiple bytes. The system further includes a control unit that shifts, byte by byte, Q data units from the queue during a first system clock cycle, where Q<P, and sends, during the first system clock cycle, the Q data units to a processing device configured to process a maximum of Q data units per system clock cycle.
Abstract: A key engine that performs route lookups for a plurality of keys may include a data processing portion configured to process one data item at a time and to request data when needed. A buffer may be configured to store a partial result from the data processing portion. A controller may be configured to load the partial result from the data processing portion into the buffer. The controller also may be configured to input another data item into the data processing portion for processing while requested data is obtained for a prior data item. A number of these key engines may be used by a routing unit to perform a large number of route lookups at the same time.
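The latency-hiding pattern described here, parking a partial result while requested data is fetched and feeding another key into the engine in the meantime, can be sketched in software as a simple event loop; the stage count, memory latency, and names below are invented for illustration and are not the patented design.

    # Interleaved route lookups: a key that stalls on a memory request parks
    # its partial result; another key runs while the data is fetched.
    from collections import deque

    def run_key_engine(keys, memory_latency=3):
        pending = deque((key, 0, None) for key in keys)  # (key, stage, partial result)
        waiting = []            # (ready_time, key, stage, partial) parked in the buffer
        results, now = {}, 0
        while pending or waiting:
            for entry in [w for w in waiting if w[0] <= now]:
                waiting.remove(entry)            # requested data has arrived
                pending.append(entry[1:])
            if pending:
                key, stage, partial = pending.popleft()
                partial = (partial or 0) + 1     # one stage of the route lookup
                if stage < 2:                    # more data needed: park the partial result
                    waiting.append((now + memory_latency, key, stage + 1, partial))
                else:
                    results[key] = partial       # lookup finished
            now += 1
        return results

    print(run_key_engine(["k1", "k2", "k3"]))    # each key completes after 3 stages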
Abstract: An xDSL accommodation apparatus includes an xDSL interface, exchange switch, and cache server. The xDSL interface interfaces with an xDSL (any type of Digital Subscriber Line) to which each of a plurality of clients is connected. The exchange switch exchanges a packet transmitted/received between a content server and a client. The cache server temporarily stores content received from the content server through a network. The cache server includes a copy/distribution section. The copy/distribution section copies the stored content and distributes the same content to the plurality of clients. A multicast distribution system and data distribution method are also disclosed.
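A toy sketch of the copy/distribution idea, content fetched once from the content server, cached, and then copied to every requesting client, is given below; the class and its interface are hypothetical and say nothing about the actual apparatus.

    # Fetch content once, cache it, and copy it out to each requesting client.
    class CacheServer:
        def __init__(self, fetch_from_origin):
            self.fetch = fetch_from_origin   # callable: content id -> bytes
            self.store = {}

        def distribute(self, content_id, clients):
            if content_id not in self.store:                 # first request: go upstream
                self.store[content_id] = self.fetch(content_id)
            content = self.store[content_id]
            return {client: content for client in clients}   # copy to every client

    cache = CacheServer(lambda cid: f"<content {cid}>".encode())
    print(cache.distribute("movie-1", ["client-a", "client-b"]))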