Abstract: A communication device for use in connecting a mobile station to a terminal includes a first group of frame memories that correspond to radio links used to transmit data to the mobile station. The first group of frame memories is configured to store Point-to-Point Protocol (PPP) frames. The communication device also includes one or more radio converters that convert the PPP frames stored in the first group of frame memories into radio frames. The communication device may further include a second group of frame memories configured to store radio frames received from the mobile station via the radio links. The communication device may also include a PPP converter configured to convert the radio frames stored in the second group of frame memories into PPP frames.
Abstract: Packet switching equipment, and a switch control system employing the same, operate the switch core portion independently of the decisions of an arbiter portion, so the overall equipment can be built with a simple control structure. The packet switching equipment includes input buffer portions that temporarily store packets arriving at the input ports and output them after adding labels indicating their destination port numbers, a switch core portion that switches the packets on the basis of the labels added by the input buffer portions, and an arbiter portion that arbitrates among the input buffer portions to grant permission to output to the output ports. A sorting network that autonomously sorts and concentrates the packets on the basis of the labels added to them is employed in the switch core portion.
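As a rough illustration of the kind of self-routing sorting network the abstract refers to, the sketch below orders packets by their destination-port labels with a Batcher-style bitonic network of compare-exchange elements. The Packet class, the function names, and the choice of bitonic sorting are assumptions for illustration, not details taken from the patent.

```python
# Illustrative sketch only: a Batcher-style bitonic sorting network that orders
# packets by the destination-port label prepended by the input buffers.
# Works for power-of-two numbers of packets, as a fixed compare-exchange network would.
from dataclasses import dataclass

@dataclass
class Packet:
    label: int      # destination port number added by the input buffer portion
    payload: bytes

def compare_exchange(cells, i, j, ascending):
    """One 2x2 sorting element: route the smaller label toward position i when ascending."""
    if (cells[i].label > cells[j].label) == ascending:
        cells[i], cells[j] = cells[j], cells[i]

def bitonic_merge(cells, lo, n, ascending):
    if n > 1:
        half = n // 2
        for i in range(lo, lo + half):
            compare_exchange(cells, i, i + half, ascending)
        bitonic_merge(cells, lo, half, ascending)
        bitonic_merge(cells, lo + half, half, ascending)

def bitonic_sort(cells, lo=0, n=None, ascending=True):
    if n is None:
        n = len(cells)
    if n > 1:
        half = n // 2
        bitonic_sort(cells, lo, half, True)
        bitonic_sort(cells, lo + half, half, False)
        bitonic_merge(cells, lo, n, ascending)

packets = [Packet(3, b"c"), Packet(0, b"a"), Packet(2, b"d"), Packet(1, b"b")]
bitonic_sort(packets)
print([p.label for p in packets])   # [0, 1, 2, 3]
```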
Abstract: A packet processing method for exchanging packet data through a plurality of layers is disclosed. The method comprises the steps of storing each entire packet in a packet memory, and storing, in a multi-port shared memory, the part of each packet that is used in the processing of a layer 2 processing portion and a layer 3 processing portion of the plurality of layers, with the layer 2 processing portion and the layer 3 processing portion accessing the same memory space of the multi-port shared memory. In addition, a pipeline processing system is used so that the layer 2 processing portion and the layer 3 processing portion do not interfere with each other when they access the shared memory.
Abstract: A system maintains a first counter value that indicates a number of times memory addresses in a memory address pool have been replenished. The system further maintains a second counter value that indicates a number of times a circular buffer has been filled with memory addresses retrieved from the memory address pool. The system ages memory addresses allocated to memory write requests based on the first and second counter values.
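The abstract describes an aging scheme driven by two wrap counters; the toy sketch below shows one way such counters could sit alongside an address pool and a circular buffer. The class, the threshold, and the exact aging rule are hypothetical, since the abstract does not give them.

```python
# Illustrative sketch, not the patented implementation: addresses are handed out
# from a circular buffer refilled from an address pool, and two counters give a
# coarse notion of how "old" an outstanding address is.
from collections import deque

class AddressAllocator:
    def __init__(self, pool, buffer_size):
        self.pool = deque(pool)          # memory address pool
        self.buffer = deque()            # circular buffer of addresses to hand out
        self.buffer_size = buffer_size
        self.replenish_count = 0         # first counter: pool replenishments
        self.fill_count = 0              # second counter: buffer fills from the pool
        self.outstanding = {}            # address -> counter values at allocation time

    def replenish_pool(self, addresses):
        self.pool.extend(addresses)
        self.replenish_count += 1

    def fill_buffer(self):
        while len(self.buffer) < self.buffer_size and self.pool:
            self.buffer.append(self.pool.popleft())
        self.fill_count += 1

    def allocate(self):
        if not self.buffer:
            self.fill_buffer()
        if not self.buffer:
            return None                  # pool exhausted; caller waits for a replenish
        addr = self.buffer.popleft()
        self.outstanding[addr] = (self.replenish_count, self.fill_count)
        return addr

    def aged_addresses(self, max_fills_behind=2):
        """Hypothetical aging rule: an address is stale once the buffer has been
        refilled more than max_fills_behind times since the address was allocated."""
        return [a for a, (_, fills) in self.outstanding.items()
                if self.fill_count - fills > max_fills_behind]
```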
Abstract: Processing of numeric addresses is facilitated by using a user interface, rather than system modules, to handle name resolution. Processing the addresses at the user interface level avoids delays and packet blocking problems associated with using system modules to perform the task. Relieving the system modules from the responsibility of processing numeric addresses allows them to process other requests, improving overall system efficiency.
Type: Grant
Filed: September 25, 2001
Date of Patent: January 30, 2007
Assignee: Juniper Networks, Inc.
Inventors: Reid Evan Wilson, Philip Austin Shafer, Robert P Enns
Abstract: A cross-bar switch includes a set of input ports to accept data packets and a set of sink ports in communication with the input ports to forward the data packets. Each sink port includes a communications link interface with a Retry input. When a signal is asserted on the Retry input, the sink port aborts transmission of a data packet and waits a predetermined period of time to retransmit the data packet.
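A minimal behavioral sketch of the Retry handling described above, assuming an abstract sink_link object with a send method and a fixed back-off delay; both are placeholders, not the patent's design.

```python
# Sketch: a sink port forwards a packet word-by-word, aborts if Retry is
# asserted, waits a predetermined back-off, then retransmits the whole packet.
import time

RETRY_DELAY_S = 0.001   # predetermined wait before retransmission (assumed value)

def transmit(sink_link, packet_words, retry_asserted):
    """retry_asserted() models sampling the Retry input of the communications link."""
    while True:
        aborted = False
        for word in packet_words:
            if retry_asserted():
                aborted = True        # abort transmission of the data packet
                break
            sink_link.send(word)
        if not aborted:
            return                    # packet delivered in full
        time.sleep(RETRY_DELAY_S)     # wait, then retransmit from the start
```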
Type: Grant
Filed: December 21, 2001
Date of Patent: January 30, 2007
Assignee: Juniper Networks, Inc.
Inventors: Abbas Rashid, Nazar Zaidi, Mark Bryers, Fred Gruner
Abstract: The present invention teaches a compact and highly integrated multiple-channel digital tuner and receiver architecture, suitable for widespread field deployment, wherein each receiver demodulator channel may be remotely, automatically, dynamically, and economically configured for a particular cable, carrier frequency, and signaling baud-rate, from an option universe that includes a plurality of input cables, a plurality of carrier frequencies, and a plurality of available baud-rates. A multiple coax input, multiple channel output, digital tuner is partitioned into a multiple coax input digitizer portion and a multiple channel output front-end portion. The digitizer portion consists of N digitizers and accepts input signals from N coax cables and digitizes them with respective A/D converters. The front-end portion consists of M front-ends and provides M channel outputs suitable for subsequent processing by M respective digital demodulators.
Abstract: A system detects an error in a network device that receives data via a group of data streams. The system receives a data unit, where the data unit is associated with at least one of the streams and a sequence number for each of the associated streams. The system determines whether each sequence number associated with the data unit is a next sequence number for the corresponding stream, and detects an error for a particular stream when the sequence number for that stream is not a next sequence number.
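The per-stream check can be restated compactly. The sketch below assumes a data unit carries one sequence number per associated stream; the resynchronization step after an error is an assumption beyond what the abstract states.

```python
# Sketch of the per-stream sequence check with assumed names.
def check_data_unit(data_unit_seqs, expected):
    """data_unit_seqs: {stream_id: sequence number} carried by the data unit.
    expected: {stream_id: next expected sequence number}, updated in place.
    Returns the list of streams on which an error is detected."""
    errors = []
    for stream_id, seq in data_unit_seqs.items():
        if seq != expected.get(stream_id, 0):
            errors.append(stream_id)      # gap or reorder detected on this stream
        expected[stream_id] = seq + 1     # resynchronize to the received number (assumption)
    return errors

expected = {}
print(check_data_unit({1: 0, 2: 0}, expected))   # [] -- both streams in order
print(check_data_unit({1: 1, 2: 2}, expected))   # [2] -- stream 2 skipped number 1
```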
Abstract: A data compression system and method capable of detecting and eliminating repeated phrases of variable length within a window of virtually unlimited size.
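This is not the patented algorithm, only a toy dictionary-based sketch of the general idea: because every anchor seen so far stays indexed, the search window is effectively the whole input, and matches are extended to variable-length phrases. The anchor length and output format are arbitrary choices.

```python
# Toy illustration: index every 4-byte anchor seen so far (unbounded history);
# when an anchor repeats, extend the match to a variable-length phrase and emit
# an (offset, length) reference instead of the literal bytes.
ANCHOR = 4

def compress(data: bytes):
    index = {}           # anchor bytes -> most recent position (window is the whole input)
    out, i = [], 0
    while i < len(data):
        key = data[i:i + ANCHOR]
        j = index.get(key)
        if j is not None:
            length = ANCHOR
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1                              # extend the repeated phrase
            out.append(("ref", i - j, length))
            for k in range(i, i + length):               # keep indexing inside the match
                index[data[k:k + ANCHOR]] = k
            i += length
        else:
            out.append(("lit", data[i:i + 1]))
            index[key] = i
            i += 1
    return out

print(compress(b"abcabcabcabcxyz"))
# [('lit', b'a'), ('lit', b'b'), ('lit', b'c'), ('ref', 3, 9), ('lit', b'x'), ('lit', b'y'), ('lit', b'z')]
```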
Abstract: A method and apparatus for scheduling virtual upstream channels within one physical upstream channel is disclosed. For each virtual upstream channel, the receiver receives a MAP message that differs from the MAP message sent downstream. Where multiple upstream receivers are used, separate MAP messages can be sent for each receiver and, consequently, for each virtual upstream channel. Multiple upstream receivers are not necessary if a single upstream receiver can change the upstream channel descriptors it uses on a per-burst basis.
Abstract: Systems and methods consistent with the present invention provide a high-speed line interface for networking devices. Such an interface may be used in networking devices, such as routers and switches, for receiving data from, and transmitting data to, high-speed links, such as lines carrying data at rates of 2.5 Gbit/s, 10 Gbit/s, 40 Gbit/s, and higher. In a preferred embodiment, the interface deserializes data from an incoming data stream onto a multi-line bus so that the data may be processed at a lower clock speed. Packets are extracted from the data on the multi-line bus and distributed among a plurality of switching/forwarding modules for processing.
Type: Grant
Filed: January 3, 2001
Date of Patent: January 16, 2007
Assignee: Juniper Networks, Inc.
Inventors: Ashok Krishnamurthi, Jeffrey Scott Dredge, Ramesh Padmanabhan, Ramalingam K. Anand
Abstract: A scheduler is disclosed that allows high-speed scheduling scalable with the number of input and output ports of a crosspoint switch while suppressing unfairness among inputs. The scheduler includes an M×M matrix of scheduling modules, each of which schedules packet forwarding connections from a corresponding input group of input ports to selected ones of a corresponding output group of output ports based on reservation information. A diagonal module pattern is used to determine a set of M scheduling modules that avoid colliding with each other. Each determined scheduling module performs reservation of packet forwarding connections based on current reservation information and transfers updated reservation information in the row and column directions of the M×M matrix.
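The diagonal module pattern itself is easy to make concrete: in an M×M matrix, phase p can activate module (i, (i+p) mod M) in every row i, so no two active modules share an input group or an output group. The phase rotation below is an assumed detail, shown only as a sketch.

```python
# Sketch of the diagonal selection: which scheduling modules may run concurrently.
def diagonal_pattern(M, phase):
    """Return (input_group, output_group) coordinates of the M modules that can
    schedule in this phase without colliding."""
    return [(i, (i + phase) % M) for i in range(M)]

M = 4
for phase in range(M):
    print(phase, diagonal_pattern(M, phase))
# Each row and each column appears exactly once per phase, and every module is
# visited exactly once across the M phases.
```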
Abstract: A device and method are disclosed for correctly restoring a read clock when there are a plurality of STM data stream transmission sources. In a CES device of an ATM communication system, ATM cells from respective connections, which are to be delivered to the same outgoing line, are accumulated in a reassembly buffer memory and a PLO control unit aggregates the amount of ATM cells accumulated in the reassembly buffer memory for each connection. Subsequently, the PLO control unit calculates the frequency of a read clock based on the amount of accumulated ATM cells for each connection. A PLO restores the read clock which is applied to read data from the reassembly buffer memory for delivery to an STM network.
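The abstract does not give the frequency formula, so the sketch below only illustrates one plausible proportional control rule: the read clock runs faster when the aggregate buffer occupancy is above a target and slower when it is below. The parameter names and the gain are invented.

```python
# Hedged sketch of a possible PLO control rule (not the patent's formula).
def read_clock_frequency(cells_per_connection, f_nominal_hz, target_cells, gain_hz_per_cell):
    """cells_per_connection: {connection_id: accumulated ATM cells} for one outgoing line."""
    total = sum(cells_per_connection.values())          # aggregate occupancy seen by the PLO control unit
    return f_nominal_hz + gain_hz_per_cell * (total - target_cells)

# Example: buffer running above target -> read faster to drain it.
print(read_clock_frequency({"vc1": 40, "vc2": 30}, f_nominal_hz=8000.0,
                           target_cells=60, gain_hz_per_cell=0.5))   # 8005.0
```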
Abstract: A packet header processing engine includes a level 2 (L2) header generation unit and a level 3 (L3) header generation unit. The L2 and L3 header generation units are implemented in parallel with one another. Mailbox registers allow the L2 and L3 header generation units to communicate with one another. The L2 header generation unit may write to a specified mailbox register only when a valid bit corresponding to the mailbox register indicates that the register does not contain valid data. After writing to the mailbox register, the L2 header generation unit changes the state of the valid bit. The L3 register then reads from the mailbox register and changes the state of the valid bit. A similar implementation of the mailbox registers allows data to flow from the L3 header generation unit to the L2 header generation unit.
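A behavioral sketch of the mailbox handshake, assuming a simple write-if-invalid / read-if-valid discipline; the class and method names are illustrative, not the patent's register interface.

```python
# Sketch: a writer may deposit data only while the valid bit is clear; the reader
# consumes the data and clears the bit, so the two units never overrun each other.
class MailboxRegister:
    def __init__(self):
        self.valid = False
        self.data = None

    def try_write(self, data):
        """Writing side (e.g., L2 toward L3): write only if no valid data is held."""
        if self.valid:
            return False          # reader has not consumed the previous word yet
        self.data = data
        self.valid = True         # writer flips the valid bit after writing
        return True

    def try_read(self):
        """Reading side: read only valid data, then flip the bit back."""
        if not self.valid:
            return None
        data, self.data = self.data, None
        self.valid = False
        return data

l2_to_l3 = MailboxRegister()
assert l2_to_l3.try_write("ether_type=0x0800")
assert not l2_to_l3.try_write("overwrite?")      # blocked until the L3 side reads
assert l2_to_l3.try_read() == "ether_type=0x0800"
```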
Type: Grant
Filed: March 22, 2002
Date of Patent: January 2, 2007
Assignee: Juniper Networks, Inc.
Inventors: Pradeep Sindhu, Raymond M. Lim, Jeffrey G. Libby
Abstract: A system and method that optimizes transmission control protocol (TCP) initial session establishment without intruding upon TCP's core algorithms. TCP's initial session establishment is accelerated by locally processing a source's initial TCP request within the source's local area network (LAN). A control module relatively near the source's LAN and another control module relatively near a destination's LAN are utilized to complete the initial TCP session establishment within the source's and the destination's respective LANs, thereby substantially eliminating the first round-trip time delay before the actual data flow begins. The first application-layer data packet can thus be transmitted at substantially the same time as the initial TCP request.
Type: Grant
Filed: March 7, 2006
Date of Patent: January 2, 2007
Assignee: Juniper Networks, Inc.
Inventors: Balraj Singh, Amit P. Singh, Vern Paxson
Abstract: Burst detection with high accuracy is achieved with a single autocorrelation circuit. Autocorrelation is performed using a preamble-embedded correlation sequence chosen such that a steeply sloped peak characterizes the autocorrelation response. The autocorrelation circuit is preferably used multiple times per clock period to deliver correlation moduli at sub-clock multiples. A contrast function makes a weighted comparison of each correlation modulus output by the autocorrelation circuit relative to adjacent correlation moduli. The contrast output defines a burst start-time uncertainty-window in a manner that is independent of signal level variability attributable to different operating conditions. A search is performed within the uncertainty window to identify the correlation maximum. Depending on the system mode (e.g., traffic mode vs. ranging mode), a priori knowledge may be preferred to the contrast output for defining the timing uncertainty window of the search.
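A much-simplified software sketch of the chain described above: correlate against the known preamble sequence, score each lag against its neighbours with a weighted contrast, open an uncertainty window around the best score, and pick the correlation maximum inside it. The real design operates on sampled signals in hardware at sub-clock multiples; the 0.5 neighbour weighting and window size here are arbitrary assumptions.

```python
# Simplified burst-start search: correlation moduli -> contrast -> window -> maximum.
def correlate(samples, sequence):
    n = len(sequence)
    return [abs(sum(samples[k + i] * sequence[i] for i in range(n)))
            for k in range(len(samples) - n + 1)]

def contrast(moduli, weight=0.5):
    """Weighted comparison of each correlation modulus with its neighbours."""
    return [m - weight * (moduli[i - 1] + moduli[i + 1]) if 0 < i < len(moduli) - 1 else 0.0
            for i, m in enumerate(moduli)]

def burst_start(samples, sequence, half_window=2):
    moduli = correlate(samples, sequence)
    c = contrast(moduli)
    center = max(range(len(c)), key=c.__getitem__)            # uncertainty-window center
    lo, hi = max(0, center - half_window), min(len(moduli), center + half_window + 1)
    return max(range(lo, hi), key=moduli.__getitem__)          # search within the window

preamble = [1, -1, 1, 1, -1, -1, 1, -1]
quiet = [0, 0, 0, 0, 0]
print(burst_start(quiet + preamble + quiet, preamble))         # 5
```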
Abstract: A multiplexing apparatus, which is connected to a switching unit and to each of a plurality of subscribers through communication lines, selectively performs cell discard processing in the case of congestion on the basis of the usage state of the same connection formed by cells from the switching unit side and from the subscribers, without installing UPC units.
Abstract: Network address translation (NAT) translates between globally unique addresses used within a global network and addresses used within a local network. A method, for example, includes mapping a first set of globally non-routable global addresses to a second set of globally routable global addresses, and forwarding packets in accordance with the mapping. The method may further include assigning the first set of addresses to devices of a local network, and forwarding packets between the devices of the local network and a global network. These techniques may significantly reduce the demand placed on routing devices in a global network.
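A toy sketch of the mapping and forwarding steps, with invented addresses and a dictionary standing in for the translation table:

```python
# Sketch: bind each non-routable source address to an address from a routable
# pool, then rewrite outbound packets according to that binding.
class Nat:
    def __init__(self, routable_pool):
        self.free = list(routable_pool)
        self.binding = {}                 # non-routable address -> routable address

    def map_address(self, private_addr):
        if private_addr not in self.binding:
            self.binding[private_addr] = self.free.pop(0)
        return self.binding[private_addr]

    def forward_outbound(self, packet):
        """packet is a dict with 'src' and 'dst'; rewrite the source address."""
        packet = dict(packet)
        packet["src"] = self.map_address(packet["src"])
        return packet

nat = Nat(["198.51.100.1", "198.51.100.2"])
print(nat.forward_outbound({"src": "10.0.0.7", "dst": "203.0.113.9"}))
# {'src': '198.51.100.1', 'dst': '203.0.113.9'}
```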
Abstract: A packet switching system capable of ensuring the sequence and continuity of packets and further compensating for delays in transmission is disclosed. Each of two redundant switch sections has a high-priority queue and a low-priority queue for each of output ports. A high-priority output selector selects one of two high-priority queues corresponding to respective ones of the two switch sections to store an output of the selected one into a high-priority output queue. A low-priority output selector selects one of two low-priority queues corresponding to respective ones of the two switch sections to store an output of the selected one into a low-priority output queue. The high-priority and low-priority output selectors are controlled depending on a system switching signal and a packet storing status of each of the high-priority and low-priority queues.
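A rough sketch of the selector logic for one output port; the fall-back-to-standby policy is an assumption beyond what the abstract states.

```python
# Sketch: per output port there is one high- and one low-priority queue per
# redundant switch section; selectors pick a section based on the system
# switching signal and on which queues currently hold packets.
def select(queue_a, queue_b, active_is_a):
    """Prefer the active switch section; fall back to the standby one if the
    active queue is empty, preserving packet continuity during switchover."""
    primary, standby = (queue_a, queue_b) if active_is_a else (queue_b, queue_a)
    if primary:
        return primary.pop(0)
    if standby:
        return standby.pop(0)
    return None

def drain_output_port(hi_a, hi_b, lo_a, lo_b, active_is_a):
    """High-priority traffic is always served before low-priority traffic."""
    pkt = select(hi_a, hi_b, active_is_a)
    if pkt is not None:
        return pkt
    return select(lo_a, lo_b, active_is_a)

print(drain_output_port(["h1"], [], ["l1"], ["l2"], active_is_a=True))   # 'h1'
```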
Abstract: A call admission control technique allowing flexible and reliable call admissions at an ATM switch in the case of an ATM network including both QoS-specified and QoS-unspecified virtual connections is disclosed. In the case where a QoS (Quality of Service) specified connection request occurs, an estimated bandwidth is calculated which is to be assigned to an existing QoS-unspecified traffic on the link associated with the QoS-specified connection request. A call control processor of the ATM switch determines whether the QoS-specified connection request is accepted, depending on whether a requested bandwidth is smaller than an available bandwidth that is obtained by subtracting an assigned bandwidth and the estimated bandwidth from a full bandwidth of the link.
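The admission test itself is stated directly in the abstract and can be written down almost verbatim; only the units and example numbers below are invented.

```python
# Accept a QoS-specified connection only if its requested bandwidth fits in what
# remains after subtracting both the already-assigned bandwidth and the bandwidth
# estimated for existing QoS-unspecified traffic on the link.
def admit(requested_mbps, link_full_mbps, assigned_mbps, estimated_unspecified_mbps):
    available_mbps = link_full_mbps - assigned_mbps - estimated_unspecified_mbps
    return requested_mbps < available_mbps

print(admit(requested_mbps=50, link_full_mbps=622,
            assigned_mbps=400, estimated_unspecified_mbps=150))   # True (50 < 72)
```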