Abstract: In one embodiment, an apparatus can include a filter module configured to receive multiple Media Access Control (MAC) addresses associated with multiple virtual ports instantiated at a first network device. Each virtual port from the multiple virtual ports can be associated with a MAC address from the multiple MAC addresses. The filter module can be configured to define a filter to be applied to a data frame sent between the first network device and a network switch, the filter being based at least in part on a MAC address prefix included in each MAC address from the multiple MAC addresses. The MAC address prefix can include an identifier uniquely associated with a second network device at which the filter module operates.
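A minimal Python sketch of the prefix-based filtering described above, assuming a hypothetical 3-byte MAC prefix and frame layout (none of these names come from the patent):

    # Minimal sketch (not from the patent): filter frames by a MAC address prefix
    # uniquely associated with the device that instantiated the virtual ports.

    GATEWAY_MAC_PREFIX = "0e:5f:a1"  # hypothetical 3-byte prefix for this device

    def build_filter(prefix):
        """Return a predicate that accepts frames whose source MAC carries the prefix."""
        prefix = prefix.lower()
        def frame_filter(frame):
            return frame["src_mac"].lower().startswith(prefix)
        return frame_filter

    allow = build_filter(GATEWAY_MAC_PREFIX)

    frames = [
        {"src_mac": "0e:5f:a1:00:00:07", "payload": b"from a local virtual port"},
        {"src_mac": "aa:bb:cc:00:00:07", "payload": b"spoofed or foreign source"},
    ]

    for f in frames:
        print(f["src_mac"], "->", "forward" if allow(f) else "drop")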
Abstract: In some embodiments, an apparatus includes a gateway device configured to be operatively coupled to a Fibre Channel switch by a first data port and a second data port. The gateway device is configured to designate the first data port as a primary data port and the second data port as a secondary data port. The gateway device is configured to associate a set of virtual ports with the first data port and not the second data port when in the first configuration. The gateway device is configured to associate the set of virtual ports with the second data port when in the second configuration. The gateway device moves from the first configuration to the second configuration when an error associated with the first data port is detected.
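A minimal Python sketch of the primary-to-secondary failover described above; the port and virtual-port names are hypothetical and the error handling is reduced to a single callback:

    # Minimal sketch (assumption-laden, not the patent's implementation): a gateway
    # that pins a set of virtual ports to a primary data port and re-homes them to
    # the secondary data port when an error is detected on the primary.

    class Gateway:
        def __init__(self, primary, secondary, virtual_ports):
            self.primary = primary          # e.g. "data-port-0"
            self.secondary = secondary      # e.g. "data-port-1"
            self.virtual_ports = virtual_ports
            # First configuration: all virtual ports ride on the primary port.
            self.assignment = {vp: self.primary for vp in self.virtual_ports}

        def on_port_error(self, failed_port):
            # Second configuration: re-associate the virtual ports with the secondary.
            if failed_port == self.primary:
                for vp in self.virtual_ports:
                    self.assignment[vp] = self.secondary

    gw = Gateway("data-port-0", "data-port-1", ["vport-1", "vport-2", "vport-3"])
    print(gw.assignment)
    gw.on_port_error("data-port-0")   # simulate a link error on the primary port
    print(gw.assignment)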
Abstract: In one embodiment, an apparatus includes a switching policy module configured to define a switching policy associating a Fibre Channel port with a destination Media Access Control (MAC) address. The switching module can be configured to receive a Fibre Channel over Ethernet (FCoE) frame from a network device and send a Fibre Channel frame encapsulated in the FCoE frame to the Fibre Channel port based at least in part on the switching policy and a destination MAC address of the FCoE frame.
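A minimal Python sketch of the switching policy described above, with invented MAC addresses and Fibre Channel port names; the policy is modeled as a simple lookup table:

    # Minimal sketch (names assumed): a switching policy that maps the destination
    # MAC address of an FCoE frame to the Fibre Channel port that should receive
    # the decapsulated Fibre Channel frame.

    switching_policy = {
        "0e:fc:00:01:02:03": "fc-port-4",   # hypothetical MAC -> FC port mapping
        "0e:fc:00:0a:0b:0c": "fc-port-7",
    }

    def handle_fcoe_frame(fcoe_frame):
        """Look up the destination MAC and hand the inner FC frame to that port."""
        fc_port = switching_policy.get(fcoe_frame["dst_mac"])
        if fc_port is None:
            return None                      # no policy entry for this destination
        fc_frame = fcoe_frame["payload"]     # the encapsulated Fibre Channel frame
        return (fc_port, fc_frame)

    print(handle_fcoe_frame({"dst_mac": "0e:fc:00:01:02:03", "payload": b"FC frame bytes"}))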
Abstract: In general, techniques are described for dynamically managing weighted queues. In accordance with the techniques, a network security device comprises a queue management module that assigns, for each queue of a plurality of queues, a quota of processing time that a user desires a processor of the network security device to consume in servicing that queue. The queue management module determines, based on the desirable quotas, a queue weight for each queue and computes the amount of processing time the processor actually consumes to process the number of packets defined by each queue weight. Based on the computation, the queue management module dynamically adjusts one or more of the weights such that subsequent amounts of processing time actually required to process the number of packets defined by each of the queue weights more accurately reflect the desirable quotas assigned to each of the queues. The network device outputs the number of packets in accordance with the adjusted weights.
Type:
Grant
Filed:
April 30, 2008
Date of Patent:
June 26, 2012
Assignee:
Juniper Networks, Inc.
Inventors:
Dongyi Jiang, Chih-Wei Chao, David Yu, Jin Shang
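A minimal Python sketch of the weight-adjustment feedback described in the queue-management abstract above; the quota values, queue names, and the proportional scaling rule are assumptions for illustration, not the patent's algorithm:

    # Minimal sketch (not the patent's algorithm, just the feedback idea): each queue
    # has a user-assigned quota of processing time; after a service round, the weight
    # (packets serviced per round) is scaled so observed time tracks the quota.

    def adjust_weights(weights, quotas, observed_time):
        """Scale each queue's weight by desired/observed processing time."""
        new_weights = {}
        for q in weights:
            if observed_time[q] > 0:
                ratio = quotas[q] / observed_time[q]
            else:
                ratio = 1.0
            new_weights[q] = max(1, round(weights[q] * ratio))
        return new_weights

    weights  = {"control": 10, "best-effort": 40}       # packets per round
    quotas   = {"control": 2.0, "best-effort": 8.0}     # desired ms per round
    observed = {"control": 4.0, "best-effort": 6.0}     # measured ms last round

    print(adjust_weights(weights, quotas, observed))    # control shrinks, best-effort grows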
Abstract: A system provides a set of services. The system includes nodes that are in communication with each other. The system segregates the services into at least first and second groups of services, assigns the first group of services to a first set of the nodes, and assigns the second group of services to a second set of nodes. The first set of nodes provides the first group of services, and the second set of nodes provides the second group of services.
Abstract: A method performed by a provider edge (PE) device in a multi-autonomous system (multi-AS) network includes receiving advertisements from other PE devices of the multi-AS, where one or more of the advertisements includes a destination AS parameter that indicates a destination AS of the multi-AS; generating pseudo-wire (PW) tables based on the advertisements received from the other PE devices; and establishing PWs with respect to the other PE devices based on the PW tables.
Abstract: A network optimization device may receive a stream of data and generate a signature for a plurality of fixed length overlapping windows of the stream of data. The device may select a predetermined number of the generated signatures for each Ln-byte segment of the data stream, wherein Ln is greater than a length of each of the windows. The network device may store the selected signatures in a bucketed hash table that includes a linked-list of entries for each bucket.
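A minimal Python sketch of the signature indexing described in the abstract above; the window size, segment size (Ln), number of signatures kept per segment, and the rule for selecting them are all assumptions:

    # Minimal sketch (window size, segment size and selection rule are assumptions):
    # compute a signature for every fixed-length overlapping window, keep a small
    # fixed number of signatures per segment, and store them in a bucketed table
    # where each bucket chains its entries in a list.

    import hashlib
    from collections import defaultdict

    WINDOW = 16          # bytes per overlapping window
    SEGMENT = 256        # Ln: bytes per segment, larger than WINDOW
    PER_SEGMENT = 4      # signatures kept per segment
    BUCKETS = 1024

    def signature(window_bytes):
        return int.from_bytes(hashlib.sha1(window_bytes).digest()[:8], "big")

    def index_stream(data):
        table = defaultdict(list)    # bucket -> list (chain) of entries
        for seg_start in range(0, len(data), SEGMENT):
            segment = data[seg_start:seg_start + SEGMENT]
            sigs = []
            for off in range(0, max(1, len(segment) - WINDOW + 1)):
                sigs.append((signature(segment[off:off + WINDOW]), seg_start + off))
            # Selection rule assumed here: keep the numerically smallest signatures.
            for sig, offset in sorted(sigs)[:PER_SEGMENT]:
                table[sig % BUCKETS].append((sig, offset))
        return table

    table = index_stream(b"example stream of data " * 40)
    print(sum(len(v) for v in table.values()), "signatures stored in", len(table), "buckets")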
Abstract: In general, the invention is directed to techniques for identifying memory overruns. For example, as described herein, a device includes a main memory that enables an addressable memory space for the device. A plurality of memory pages each comprises a separate, contiguous block of addressable memory locations within the addressable memory space. The device also includes a memory manager comprising a secure pool allocator that assigns a secure pool size value to a first one of the plurality of memory pages. The secure pool size value defines a plurality of protected memory spaces in the first memory page that partition the first memory page into a plurality of secure objects. The device also includes a memory management unit comprising secure pool logic that determines, based on the secure pool size value, whether a memory address is an address of one of the protected memory spaces in the first memory page.
Type:
Grant
Filed:
January 13, 2010
Date of Patent:
June 26, 2012
Assignee:
Juniper Networks, Inc.
Inventors:
Timothy Noel Thathapudi, Srinivasa Dharwad Satyanarayana, Siddharth Arun Tuli
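A minimal Python sketch of the secure-pool check described in the memory-overrun abstract above; the page, object, and guard sizes are assumptions, and the protected spaces are modeled as fixed gaps after each object:

    # Minimal sketch (sizes are assumptions): a page is partitioned into fixed-size
    # objects separated by small protected spaces; an access that lands in one of
    # the protected spaces indicates an overrun of the preceding object.

    PAGE_SIZE = 4096
    OBJECT_SIZE = 120        # secure pool size value: usable bytes per object
    GUARD_SIZE = 8           # protected space placed after every object
    SLOT = OBJECT_SIZE + GUARD_SIZE

    def is_protected(page_base, addr):
        """Return True if addr falls inside a protected space of the page."""
        offset = addr - page_base
        if not 0 <= offset < PAGE_SIZE:
            return False                       # not on this page
        return (offset % SLOT) >= OBJECT_SIZE  # past the object, inside its guard

    page_base = 0x10000
    print(is_protected(page_base, page_base + 50))            # inside object 0 -> False
    print(is_protected(page_base, page_base + OBJECT_SIZE))   # first guard byte -> True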
Abstract: An intrusion detection and prevention (IDP) device includes an attack detection module and a forwarding component. The attack detection module applies a compound attack definition to a packet flow of a computer network to determine whether the packet flow includes at least one pattern and at least one protocol anomaly. The forwarding component selectively discards the packet flow based on the determination. The IDP device may further include a reassembly module to form application-layer communications from the packet flows, and a plurality of protocol-specific decoders to process the application-layer communications to extract application-layer elements and detect protocol anomalies.
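A minimal Python sketch of a compound attack definition as described above, requiring both a byte pattern and a protocol anomaly before a flow is discarded; the rule contents are invented for illustration:

    # Minimal sketch (rule content is invented for illustration): a compound attack
    # definition fires only when both a byte pattern and a protocol anomaly are seen
    # in the same flow, and matching flows are discarded by the forwarding step.

    import re

    compound_rule = {
        "pattern": re.compile(rb"\x90{32,}"),                 # e.g. a long NOP sled
        "anomaly": lambda flow: flow["header_length"] > 8192  # e.g. oversized header
    }

    def inspect_flow(flow):
        has_pattern = compound_rule["pattern"].search(flow["payload"]) is not None
        has_anomaly = compound_rule["anomaly"](flow)
        return "discard" if (has_pattern and has_anomaly) else "forward"

    flow = {"payload": b"\x90" * 64 + b"exploit", "header_length": 16384}
    print(inspect_flow(flow))   # both conditions met -> discard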
Abstract: A data prefetching technique uses predefined prefetching criteria and prefetching models to identify and retrieve prefetched data. A prefetching model that defines data to be prefetched via a network may be stored. It may be determined whether prefetching initiation criteria have been satisfied. Data for prefetching may be identified based on the prefetching model when the prefetching initiation criteria have been satisfied. The identified data may be prefetched, via the network, based on the prefetching model.
Abstract: In some embodiments, an apparatus includes a network management module. The network management module is configured to send a request for power output data from a first network element having a first power supply configured to be coupled to a first power outlet, and a second power supply configured to be coupled to a second power outlet. The network management module is configured to receive a first confirmation from the first network element that the first power supply and the second power supply are receiving power. The network management module is configured to send a request to disable a third power outlet and to receive, after sending the request to disable the third power outlet, a second confirmation from the first network element that the first power supply and the second power supply are receiving power. The network management module is configured to define a power distribution table after receiving the second confirmation, the power distribution table designating the third power outlet as unused.
Type:
Application
Filed:
December 15, 2010
Publication date:
June 21, 2012
Applicant:
JUNIPER NETWORKS, INC.
Inventors:
Ashley SAULSBURY, Michael O'GORMAN, Gunes AYBAY
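A minimal Python sketch of the outlet-probing sequence described in the power-management abstract above; the element and outlet interfaces are invented stand-ins:

    # Minimal sketch (the element/outlet API is assumed): probe which outlets a
    # network element actually draws from by disabling one outlet at a time and
    # checking that both of its power supplies still report power.

    def build_power_distribution_table(element, outlets):
        table = {}
        for outlet in outlets:
            outlet.disable()
            still_powered = element.both_supplies_receiving_power()
            outlet.enable()
            # If disabling the outlet changed nothing, the element does not use it.
            table[outlet.name] = "unused" if still_powered else "in-use"
        return table

    class FakeOutlet:
        def __init__(self, name, feeds_element):
            self.name, self.feeds_element, self.on = name, feeds_element, True
        def disable(self): self.on = False
        def enable(self):  self.on = True

    class FakeElement:
        def __init__(self, outlets): self.outlets = outlets
        def both_supplies_receiving_power(self):
            # Simplification: each supply is fed by one of the powered outlets.
            return sum(1 for o in self.outlets if o.on and o.feeds_element) >= 2

    outlets = [FakeOutlet("A", True), FakeOutlet("B", True), FakeOutlet("C", False)]
    element = FakeElement(outlets)
    print(build_power_distribution_table(element, outlets))   # C is reported unused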
Abstract: A method may include receiving a packet in a network device; selecting one of a group of ingress buffers, where each ingress buffer is associated with a different one of a group of processors; distributing the packet to the selected ingress buffer; and scheduling the packet, based on a congestion state of a queue in an egress buffer associated with the packet, to be processed by the processor associated with the selected ingress buffer to provide a network service.
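A minimal Python sketch of the ingress-buffer distribution and congestion-aware scheduling described above; the hashing choice and two-processor layout are assumptions:

    # Minimal sketch (queue/buffer layout assumed): distribute an arriving packet to
    # one of several per-processor ingress buffers, but only schedule it for
    # processing when the egress queue it maps to is not congested.

    from collections import deque

    ingress = {0: deque(), 1: deque()}        # one ingress buffer per processor
    egress_congested = {"q-hi": False, "q-lo": True}

    def receive(packet):
        proc = hash(packet["flow"]) % len(ingress)   # pick an ingress buffer/processor
        ingress[proc].append(packet)
        return proc

    def schedule(proc):
        """Return the next packet to process, deferring ones whose egress queue is congested."""
        for _ in range(len(ingress[proc])):
            pkt = ingress[proc].popleft()
            if egress_congested[pkt["egress_queue"]]:
                ingress[proc].append(pkt)            # defer until congestion clears
            else:
                return pkt
        return None

    p = receive({"flow": "a", "egress_queue": "q-hi"})
    print(schedule(p))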
Abstract: In some embodiments, a system includes a first network control entity, a second network control entity and a third network control entity. The first network control entity and the second network control entity are associated with a first network segment. The third network control entity is associated with a second network segment. The first network control entity is operable to send to the second network control entity an identifier of the first network segment and forwarding-state information associated with a data port at a first network element. The second network control entity is operable to receive the identifier of the first network segment and the forwarding-state information. The second network control entity is operable to send the forwarding-state information to a second network element. The first network control entity does not send the identifier of the first network segment and the forwarding-state information to the third network control entity.
Abstract: In some embodiments, an apparatus includes a compute device to communicate with a network control entity at each access switch from a set of access switches that define a portion of a data plane having a switch fabric coupling, as hierarchical peers, each access switch from the set of access switches. The compute device is operable to define a portion of a control plane that includes the network control entities from the set of access switches such that the compute device is hierarchically removed from the network control entities from the set of access switches. The compute device is operable to receive forwarding-state information from a first access switch from the set of access switches. The compute device is operable to send the forwarding-state information to a second access switch from the set of access switches.
Abstract: In one embodiment, an apparatus includes a shared memory buffer including a lead memory bank, and a write multiplexing module configured to send a leading segment from a set of segments to the lead memory bank. The set of segments includes bit values from a set of variable-sized cells. The write multiplexing module is further configured to send each segment from the set of segments identified as a trailing segment to a portion of the shared memory buffer mutually exclusive from the lead memory bank.
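A minimal Python sketch of the write multiplexing described above; the segment size and the number of trailing banks are assumptions:

    # Minimal sketch (bank count and segment size assumed): split a variable-sized
    # cell into fixed segments, write the leading segment to the lead bank, and
    # spread the trailing segments over the remaining banks.

    SEGMENT_BYTES = 8
    LEAD_BANK = 0
    TRAILING_BANKS = [1, 2, 3]      # mutually exclusive from the lead bank

    banks = {b: [] for b in [LEAD_BANK] + TRAILING_BANKS}

    def write_cell(cell):
        segments = [cell[i:i + SEGMENT_BYTES] for i in range(0, len(cell), SEGMENT_BYTES)]
        banks[LEAD_BANK].append(segments[0])                 # leading segment -> lead bank
        for n, seg in enumerate(segments[1:]):               # trailing segments -> other banks
            banks[TRAILING_BANKS[n % len(TRAILING_BANKS)]].append(seg)

    write_cell(b"variable-sized cell payload")
    for bank, segs in banks.items():
        print(bank, segs)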
Abstract: An integrated, multi-service network client for cellular mobile devices is described. The multi-service network client can be deployed as a single software package on cellular mobile network devices to provide integrated services including secure enterprise virtual private network (VPN) connectivity, acceleration, security management including monitored and enforced endpoint compliance, and collaboration services. Once installed on the cellular mobile device, the multi-service client integrates with an operating system of the device to provide a single entry point for user authentication for secure enterprise connectivity, endpoint security services including endpoint compliance with respect to anti-virus and spyware software, and comprehensive integrity checks.
Type:
Application
Filed:
February 23, 2012
Publication date:
June 21, 2012
Applicant:
JUNIPER NETWORKS, INC.
Inventors:
Vikki Yin Wei, Subramanian Iyer, Richard Campagna, James Wood
Abstract: In some embodiments, an apparatus implemented in a memory and/or a processing device includes a first network control entity to manage a first data plane module associated with a port from a set of ports at a first access switch. The first network control entity associates an identifier of a peripheral processing device operatively coupled to the port from the set of ports with a next hop reference. The first network control entity provides the next hop reference to a second network control entity that manages a second data plane module at a second access switch such that the second data plane module can append the next hop reference to a data packet when the peripheral processing device is within a data path between and including the second access switch and a destination peripheral processing device.
Type:
Application
Filed:
December 15, 2010
Publication date:
June 21, 2012
Applicant:
Juniper Networks, Inc.
Inventors:
Vijayabhaskar Annamalai Kalusivalingam, Quaizar Vohra, Ravi Shekhar, Jaihari Loganathan
Abstract: In some embodiments, a switch fabric system includes multiple access switches configured to be operatively coupled to a switch fabric. The multiple access switches include multiple ports each to be operatively coupled to a peripheral processing device. A first set of ports from the multiple ports and a second set of ports from the multiple ports are managed by a first network control entity when the switch fabric system is in a first configuration. The first set of ports is managed by the first network control entity and the second set of ports is managed by a second network control entity when the switch fabric system is in a second configuration. The second network control entity is automatically initiated when the system is changed from the first configuration to the second configuration.
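A minimal Python sketch of the configuration change described above, in which a second network control entity is initiated automatically and takes over the second set of ports; the object model is invented for illustration:

    # Minimal sketch (entities modeled as plain objects): in the first configuration
    # one network control entity manages both sets of ports; changing to the second
    # configuration automatically initiates a second entity and hands it set two.

    class NetworkControlEntity:
        def __init__(self, name):
            self.name, self.ports = name, set()

    class SwitchFabricSystem:
        def __init__(self, set_one, set_two):
            self.set_one, self.set_two = set_one, set_two
            self.nce1 = NetworkControlEntity("nce-1")
            self.nce2 = None
            self.nce1.ports = set(set_one) | set(set_two)      # first configuration

        def change_to_second_configuration(self):
            self.nce2 = NetworkControlEntity("nce-2")          # initiated automatically
            self.nce1.ports = set(self.set_one)
            self.nce2.ports = set(self.set_two)

    system = SwitchFabricSystem({"p1", "p2"}, {"p3", "p4"})
    system.change_to_second_configuration()
    print(system.nce1.ports, system.nce2.ports)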
Abstract: A device may select a longest run of contiguous unwritten pages from multiple runs of contiguous unwritten pages provided in a ternary content addressable memory, and may write a rule on a page that is located at a middle portion of the longest run to create two runs of contiguous unwritten pages. The device may also receive a packet, and may apply the rule to the packet.
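A minimal Python sketch of the page-selection rule described above, with the ternary content addressable memory modeled as a Python list of pages:

    # Minimal sketch (pages modeled as a Python list): find the longest run of
    # unwritten pages, write the new rule on the page in the middle of that run,
    # which leaves two unwritten runs on either side for later insertions.

    def longest_unwritten_run(pages):
        best_start, best_len, start = 0, 0, None
        for i, page in enumerate(pages + [object()]):   # sentinel ends a trailing run
            if page is None and start is None:
                start = i
            elif page is not None and start is not None:
                if i - start > best_len:
                    best_start, best_len = start, i - start
                start = None
        return best_start, best_len

    def insert_rule(pages, rule):
        start, length = longest_unwritten_run(pages)
        middle = start + length // 2
        pages[middle] = rule
        return middle

    tcam = [None] * 8
    print(insert_rule(tcam, "rule-A"))   # writes at page 4
    print(insert_rule(tcam, "rule-B"))   # writes in the middle of the larger remaining run
    print(tcam)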
Abstract: A retention-extraction device is provided for a removable card in a chassis. The device includes an actuation rod having a cam slot, the actuation rod configured to provide linear movement along the length of the actuation rod, and an extraction lever operatively connected to a proximal end of the actuation rod and pivotally secured to the chassis. The device also includes a bell crank with a cam follower that is configured to ride in the cam slot and a latch hook that pivots between an open and closed position based on the motion of the bell crank. The linear movement of the actuation rod causes the extraction lever to apply a force to a portion of the card and causes the latch hook to pivot to an open position to allow removal of the card.