Contention Resolution For Output Patents (Class 370/414)
  • Patent number: 7571017
    Abstract: An Intelligent Data Multiplexer (“IDM”) can be used to interface a plurality of host computers with a semiconductor manufacturing tool. In one embodiment, the IDM has a plurality of host-side ports configured to receive messages from each of the host computers interfaced with the semiconductor manufacturing tool. The IDM also includes a multiplexer configured to multiplex the messages received from the host-side ports. To process potentially conflicting messages from the host computers, the IDM has a conflict resolution module. The messages are then delivered from the IDM to the semiconductor manufacturing tool through a tool-side port used to connect the semiconductor manufacturing tool to the IDM.
    Type: Grant
    Filed: November 7, 2003
    Date of Patent: August 4, 2009
    Assignee: Applied Materials, Inc.
    Inventor: Alexey G. Goder
  • Patent number: 7558197
    Abstract: A system provides congestion control in a network device. The system includes multiple queues, a dequeue engine, a drop engine, and an arbiter. The queues temporarily store data. The dequeue engine selects a first one of the queues and dequeues data from the first queue. The drop engine selects a second one of the queues to examine and selectively drop data from the second queue. The arbiter controls selection of the queues by the dequeue engine and the drop engine.
    Type: Grant
    Filed: July 30, 2002
    Date of Patent: July 7, 2009
    Assignee: Juniper Networks, Inc.
    Inventors: Pradeep Sindhu, Debashis Basu, Jayabharat Boddu, Avanindra Godbole
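    Illustrative sketch (not part of the patent): a minimal Python model of an arbiter that alternates queue access between a dequeue engine and a drop engine, as in the abstract above. The round-robin selection, the fixed drop threshold, and all names below are assumptions for illustration, not the patented design.

      from collections import deque

      class CongestionControl:
          """Toy model: an arbiter alternates queue access between a
          dequeue engine and a drop engine (hypothetical policies)."""

          def __init__(self, num_queues, drop_threshold=8):
              self.queues = [deque() for _ in range(num_queues)]
              self.drop_threshold = drop_threshold
              self.dequeue_ptr = 0   # round-robin pointer for the dequeue engine
              self.drop_ptr = 0      # round-robin pointer for the drop engine

          def enqueue(self, qid, data):
              self.queues[qid].append(data)

          def dequeue_engine(self):
              # Select the next non-empty queue and dequeue one unit of data.
              for i in range(len(self.queues)):
                  qid = (self.dequeue_ptr + i) % len(self.queues)
                  if self.queues[qid]:
                      self.dequeue_ptr = (qid + 1) % len(self.queues)
                      return qid, self.queues[qid].popleft()
              return None

          def drop_engine(self):
              # Examine the next queue and selectively drop from its tail
              # if it has grown beyond the threshold.
              qid = self.drop_ptr
              self.drop_ptr = (self.drop_ptr + 1) % len(self.queues)
              if len(self.queues[qid]) > self.drop_threshold:
                  return qid, self.queues[qid].pop()   # drop newest entry
              return None

          def arbiter_cycle(self):
              # The arbiter grants the dequeue engine and the drop engine
              # one turn each per cycle.
              return self.dequeue_engine(), self.drop_engine()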
  • Patent number: 7545808
    Abstract: A network device switches variable length data units from a source to a destination in a network. An input port receives the variable length data unit and a divider divides the variable length data unit into uniform length data units for temporary storage in the network device. A distributed memory includes a plurality of physically separated memory banks addressable using a single virtual address space and an input switch streams the uniform length data units across the memory banks based on the virtual address space. The network device further includes an output switch for extracting the uniform length data units from the distributed memory by using addresses of the uniform length data units within the virtual address space. The output switch reassembles the uniform length data units to reconstruct the variable length data unit. An output port receives the variable length data unit and transfers the variable length data unit to the destination.
    Type: Grant
    Filed: September 15, 2005
    Date of Patent: June 9, 2009
    Assignee: Juniper Networks, Inc.
    Inventors: Pradeep S. Sindhu, Dennis C. Ferguson, Bjorn O. Liencres, Nalini Agarwal, Hann-Hwan Ju, Raymond Marcelino Manese Lim, Rasoul Mirzazadeh Oskouy, Sreeram Veeragandham
  • Patent number: 7545745
    Abstract: A terminal adapter for guaranteeing the quality of service of both voice and data packets is disclosed. Such quality is ensured by inserting gaps between successive data packets in a stream of multiplexed data and/or voice packets. A gap after a particular data packet is proportional to the size of that particular data packet. In this way, bandwidth is preserved for any voice packets that may have arrived during the transfer of the data packet as well as for any voice packets that arrive during the gap. The unconstrained upstream data bandwidth and the bandwidth used by voice calls may each be estimated by taking a plurality of instantaneous measurements of the available bandwidth and/or taking individual direct measurements. The size of data packets may be limited to a maximum size in order to ensure that time-sensitive voice packets experience only an acceptable delay in queue for transmission.
    Type: Grant
    Filed: January 14, 2005
    Date of Patent: June 9, 2009
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Ali M. Cherchali, Gagan Lal Choudhury, Gerald Murray Ezrol, Marius Jonas Gudelis, Thomas Joseph Killian, Jerry A. Leger, Norman L. Schryer
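    Illustrative sketch (not part of the patent): one way to compute an inter-packet gap proportional to the size of the data packet just transmitted, so that a fixed fraction of upstream bandwidth remains free for voice, as the abstract above describes. The proportionality rule, rate, and fraction below are assumptions for illustration.

      def gap_after_data_packet(packet_bytes, upstream_bps, voice_fraction=0.5):
          """Return the idle gap (seconds) to insert after a data packet.

          The gap is proportional to the time the packet occupied the link,
          so data never uses more than (1 - voice_fraction) of the bandwidth
          over any packet-plus-gap interval (illustrative policy only).
          """
          transmit_time = packet_bytes * 8 / upstream_bps
          # gap / (gap + transmit_time) == voice_fraction  =>  gap = k * transmit_time
          k = voice_fraction / (1.0 - voice_fraction)
          return k * transmit_time

      # Example: a 1500-byte data packet on a 1 Mb/s upstream link with half the
      # bandwidth reserved for voice gets a gap equal to its own duration.
      print(gap_after_data_packet(1500, 1_000_000))   # 0.012 s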
  • Patent number: 7525978
    Abstract: A system and method that can be deployed to schedule links in a switch fabric. The operation uses two functional elements: one updates a priority link list, and the other then selects a link using that list.
    Type: Grant
    Filed: April 15, 2005
    Date of Patent: April 28, 2009
    Assignee: Altera Corporation
    Inventors: Vahid Tabatabaee, Son Truong Ngo
  • Patent number: 7522527
    Abstract: Disclosed is a configuration method and device for a two-dimensional expandable crossbar matrix switch for application to tera-level high-speed and large-capacity switches. The present invention includes N input ports, N output ports, and an N×N matrix switch for transmitting cells between the input and output ports. Each input port includes N VOQs, which are sequentially grouped n at a time to form L VOQ groups. The L VOQ groups are connected to L XSUs through independent interface ports. Therefore, each input port can transmit a maximum of L cells to the matrix switch during one cell time slot.
    Type: Grant
    Filed: September 15, 2003
    Date of Patent: April 21, 2009
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jong-Arm Jun, Sung-Hyuk Byun, Byung-Jun Ahn, Seung-Yeob Nam, Dan-Keun Sung
  • Patent number: 7515586
    Abstract: Switch fabrics (10) comprise first stages (11) for receiving multicast input signals (A-C) and second stages (12) for generating output signals in response to the input signals. The switch fabrics (10) are coupled to detectors (31) for detecting parameters indicating conditions of the second stages (12) per output signal and for generating detection results per output signal, and are coupled to controllers (21) for controlling the second stages (12) per output signal in response to the detection results. Such switch fabrics (10) handle output congestion in a more individual way. When one part of the second stage (12) is congested, the copying of the multicast input signals (A-C) into output signals and their internal transmission no longer need to be ceased. Only the output signal corresponding to the congested part of the second stage (12) cannot be delivered.
    Type: Grant
    Filed: June 21, 2005
    Date of Patent: April 7, 2009
    Assignee: ALCATEL
    Inventor: Bart Joseph Gerard Pauwels
  • Publication number: 20090067428
    Abstract: Disclosed are methods, apparatus and computer program products, in accordance with exemplary embodiments of this invention, that provide a communication network having an enhanced data packet source routing procedure, that provide enhanced QoS functionality in a communication network where a first network protocol layer implements QoS with relative guarantees and best effort, while an underlying layer provides physical resource distribution between data pipes with absolute QoS guarantees, and that provide resource reservation, management and releasing for per-flow resource management with strict/hard QoS guarantees. The communication network may employ optical and/or electrical data paths.
    Type: Application
    Filed: June 7, 2006
    Publication date: March 12, 2009
    Inventors: Sergey Balandin, Michel Gillet
  • Patent number: 7492781
    Abstract: The object of the invention is to create a router which has an enhanced processing speed. According to the invention, before access through a readout unit, the pointers for information packets stored in the buffer memory are arranged as required. If an overflow is imminent in a buffer memory area, for example, then individual pointers are selected and removed from the buffer memory area. The selected pointers are shifted into an additional buffer memory area, for example. This additional buffer memory area is then preferentially read out, so that the selected pointers are read out before the pointers in the buffer memory area. The criterion for the selection of a pointer is, for example, an expired reactivation time or a buffer memory area that is filled above a threshold value.
    Type: Grant
    Filed: August 27, 2002
    Date of Patent: February 17, 2009
    Assignee: Alcatel
    Inventor: Ralf Klotsche
  • Patent number: 7453898
    Abstract: Methods and apparatus are disclosed for simultaneously scheduling multiple priorities of packets, such as in systems having a non-blocking switching fabric. In one implementation, the maximum bandwidth which a particular input can send is identified. During a scheduling cycle, a current bandwidth desired for a first priority of traffic is identified, which leaves the remaining bandwidth available for a second priority of traffic without affecting the bandwidth allocated to the first priority of traffic. By determining these bandwidth amounts at each iteration of a scheduling cycle, multiple priorities of traffic can be simultaneously scheduled. This approach may be used by a wide variety of scheduling approaches, such as, but not limited to using a SLIP algorithm or variant thereof. When used in conjunction with a SLIP algorithm, the current desired bandwidths typically correspond to high and low priority requests.
    Type: Grant
    Filed: January 9, 2003
    Date of Patent: November 18, 2008
    Assignee: Cisco Technology, Inc.
    Inventors: Earl T. Cohen, Flavio Giovanni Bonomi
  • Patent number: 7447204
    Abstract: A system and method for classification of data units in a network device acts as a bridge in heterogeneous networks, provides many different services and provisions many different transport mechanisms. The data classifier generates an ID that is internally used by the network device in managing, queuing, processing, scheduling and routing to egress the data unit. This internal ID enables the device to accept any type of data units from any physical/logical ports or channels and output those data units on any physical/logical ports or channels that are available. The device utilizes learning on a per-flow basis and can enable the device to identify and process data units used in private line services and private LAN services.
    Type: Grant
    Filed: January 27, 2004
    Date of Patent: November 4, 2008
    Assignee: RMI Corporation
    Inventor: Paolo Narvaez
  • Patent number: 7447152
    Abstract: An apparatus for controlling traffic congestion includes: a transmitting processor including a packet classifying unit adapted to classify packets to be processed in a receiving processor and packets to be forwarded via the transmitting processor, the transmitting processor and the receiving processor having different traffic processing speeds; a buffer adapted to store the packets to be forwarded from the packet classifying unit to the receiving processor; and the receiving processor including a token driver adapted to output the packets stored in the buffer in accordance with a token bucket algorithm in response to an interrupt signal of the transmitting processor and to transmit the packets to a corresponding application, and a monitoring unit adapted to analyze and monitor a resource occupancy rate and a traffic characteristic used by the token driver to set an amount of tokens.
    Type: Grant
    Filed: December 13, 2004
    Date of Patent: November 4, 2008
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Bong-Cheol Kim, Byung-Gu Choe, Yong-Seok Park
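    Illustrative sketch (not part of the patent): a generic token bucket of the kind the abstract's token driver applies, together with a drain step that forwards buffered packets while tokens remain. The refill rate, bucket depth, and byte-sized packets are assumptions for illustration.

      import time
      from collections import deque

      class TokenBucket:
          """Generic token bucket: tokens accrue at `rate` per second up to
          `capacity`; a packet may be forwarded only if enough tokens exist."""

          def __init__(self, rate, capacity):
              self.rate = rate
              self.capacity = capacity
              self.tokens = capacity
              self.last = time.monotonic()

          def _refill(self):
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now

          def consume(self, amount):
              self._refill()
              if self.tokens >= amount:
                  self.tokens -= amount
                  return True
              return False

      def drain_buffer(buffer, bucket):
          """Forward buffered packets while tokens last (a stand-in for the
          interrupt-driven token driver described in the abstract)."""
          delivered = []
          while buffer and bucket.consume(len(buffer[0])):
              delivered.append(buffer.popleft())
          return delivered

      # Example: 64 kB/s rate, 8 kB burst, draining two buffered packets.
      bucket = TokenBucket(rate=64_000, capacity=8_000)
      print(len(drain_buffer(deque([b"a" * 1500, b"b" * 1500]), bucket)))   # 2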
  • Patent number: 7447229
    Abstract: A data network and a method for providing prioritized data movement between endpoints connected by multiple logical channels. Such a data network may include a first node comprising a first plurality of first-in, first-out (FIFO) queues arranged for high priority to low priority data movement operations; and a second node operatively connected to the first node by multiple control and data channels, and comprising a second plurality of FIFO queues arranged in correspondence with the first plurality of FIFO queues for high priority to low priority data movement operations via the multiple control and data channels; wherein an I/O transaction is accomplished by one or more control channels and data channels created between the first node and the second node for moving commands and data for the I/O transaction during the data movement operations, in the order from high priority to low priority.
    Type: Grant
    Filed: October 27, 2004
    Date of Patent: November 4, 2008
    Assignee: Intel Corporation
    Inventors: Greg J. Regnier, Jeffrey M. Butler, Dave B. Minturn
  • Patent number: 7430169
    Abstract: The decision within a packet processing device to transmit a newly arriving packet into a queue to await further processing or to discard the same packet is made by a flow control method and system. The flow control is updated with a constant period determined by storage and flow rate limits. The update includes comparing current queue occupancy to a threshold. The outcome of the update is adjustment up or down of the transmit probability value. The value is stored for the subsequent period of flow control and packets arriving during that period are subject to a transmit or discard decision that uses that value.
    Type: Grant
    Filed: June 3, 2002
    Date of Patent: September 30, 2008
    Assignee: International Business Machines Corporation
    Inventors: James Johnson Allen, Jr., Brian Mitchell Bass, Gordon Taylor Davis, Clark Debs Jeffries, Jitesh Ramachandran Nair, Ravinder Kumar Sabhikhi, Michael Steven Siegel, Rama Mohan Yedavalli
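    Illustrative sketch (not part of the patent): a transmit probability updated at a constant period by comparing queue occupancy to a threshold, then applied to each arriving packet as a transmit-or-discard decision, as the abstract above outlines. The step sizes and threshold below are assumptions for illustration.

      import random

      class ProbabilisticFlowControl:
          """Each period, raise or lower a transmit probability by comparing
          current queue occupancy to a threshold; arrivals during the next
          period are admitted with that probability (illustrative values)."""

          def __init__(self, threshold, step_up=0.05, step_down=0.10):
              self.threshold = threshold
              self.step_up = step_up
              self.step_down = step_down
              self.transmit_prob = 1.0

          def periodic_update(self, queue_occupancy):
              # Adjust the stored probability up or down once per period.
              if queue_occupancy > self.threshold:
                  self.transmit_prob = max(0.0, self.transmit_prob - self.step_down)
              else:
                  self.transmit_prob = min(1.0, self.transmit_prob + self.step_up)

          def admit(self):
              # Transmit-or-discard decision for each packet arriving during
              # the current period.
              return random.random() < self.transmit_prob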
  • Patent number: 7424026
    Abstract: Disclosed is a device, a computer program and a method to receive and buffer data packets that contain information that is representative of time-ordered content, such as a voice signal, that is intended to be presented to a person in a substantially continuous and substantially uniform temporal sequence; to decode the information to obtain samples and to buffer the samples prior to generating a playout signal. The samples are time scaled as a function of packet network conditions to enable changing the play-out rate to provide a substantially continuous output signal when the data packets are received at a rate that differs from a rate at which the data packets are created. The time scaling operation operates with a base delay that is controlled in a positive sense when the data packets are received at a rate that is slower than a rate at which the data packets are created, and a reserve delay that is managed to provide insurance against an interruption should the base delay become negative.
    Type: Grant
    Filed: April 28, 2004
    Date of Patent: September 9, 2008
    Assignee: Nokia Corporation
    Inventor: Jani Mallila
  • Patent number: 7424027
    Abstract: A head of line blockage avoidance system for use with network systems that employ packets having an associated priority and a method of operation thereof. In one embodiment, the head of line blockage avoidance system includes: (1) m inputs, m numbering at least two, configured to receive the packets, (2) n packet first-in-first-out buffers (FIFOs), n numbering at least three, each of the packet FIFOs configured to receive at least one of the packets from the m inputs, (3) a priority summarizer configured to generate a priority summary of the packets within the m inputs and the n packet FIFOs, and (4) a scheduler configured to cause one of the n packet FIFOs to be queued for processing based on the priority summary.
    Type: Grant
    Filed: January 9, 2002
    Date of Patent: September 9, 2008
    Assignee: Lucent Technologies Inc.
    Inventor: David P. Sonnier
  • Patent number: 7408947
    Abstract: A system and method of scheduling packets or cells for a switch device that includes a plurality of input ports each having at least one input queue, a plurality of switch units, and a plurality of output ports. There is generated, by each input port having a packet or cell in its at least one queue, a request to output the corresponding packet or cell to each of the output ports to which a corresponding packet or cell is to be sent, wherein the request includes a specific one of the plurality of switch units to be used in a transfer of the packet or cell from the corresponding input port to the corresponding output port, the specific one of the plurality of switch units being selected according to a first priority scheme. Access is granted, per output port per switch unit, to the request made, the granting being based on a second priority scheme. Grants are accepted per input port per switch unit, the accepting being based on a third priority scheme.
    Type: Grant
    Filed: January 6, 2005
    Date of Patent: August 5, 2008
    Assignee: Enigma Semiconductor
    Inventor: Jacob V. Nielsen
  • Patent number: 7382787
    Abstract: A method for routing and switching data packets from one or more incoming links to one or more outgoing links of a router. The method comprises receiving a data packet from the incoming link, assigning at least one outgoing link to the data packet based on the destination address of the data packet, and after the assigning operation, storing the data packet in a switching memory based on the assigned outgoing link. The data packet is then extracted from the switching memory and transmitted along the assigned outgoing link. The router may include a network processing unit having one or more systolic array pipelines for performing the assigning operation.
    Type: Grant
    Filed: June 20, 2002
    Date of Patent: June 3, 2008
    Assignee: Cisco Technology, Inc.
    Inventors: Peter M. Barnes, Nikhil Jayaram, Anthony J. Li, William L. Lynch, Sharad Mehrotra
  • Patent number: 7382792
    Abstract: A queue scheduling mechanism in a data packet transmission system, the data packet transmission system including a transmission device for transmitting data packets, a reception device for receiving the data packets, a set of queue devices respectively associated with a set of priorities each defined by a priority rank for storing each data packet transmitted by the transmission device into the queue device corresponding to its priority rank, and a queue scheduler for reading, at each packet cycle, a packet in one of the queue devices determined by a normal priority preemption algorithm. The queue scheduling mechanism includes a credit device that provides at each packet cycle a value N defining the priority rank to be considered by the queue scheduler whereby a data packet is read by the queue scheduler from the queue device corresponding to the priority N instead of the queue device determined by the normal priority preemption algorithm.
    Type: Grant
    Filed: November 21, 2002
    Date of Patent: June 3, 2008
    Assignee: International Business Machines Corporation
    Inventors: Alain Blanc, Bernard Brezzo, Rene Gallezot, Francois Le Mauf, Daniel Wind
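    Illustrative sketch (not part of the patent): a strict-priority queue scheduler in which a credit device can name a priority rank N for the current packet cycle, overriding the normal priority preemption so lower-priority queues are not starved, as the abstract above describes. The credit pattern and queue count are assumptions for illustration.

      from collections import deque
      from itertools import cycle

      class CreditScheduler:
          """Queues are indexed by priority rank (0 = highest).  Normally the
          highest-priority non-empty queue is read; when the credit device
          supplies a rank N for the current packet cycle, that queue is read
          instead if it holds a packet (illustrative policy)."""

          def __init__(self, num_priorities, credit_pattern):
              self.queues = [deque() for _ in range(num_priorities)]
              # credit_pattern: rank-or-None values cycled once per packet cycle.
              self.credits = cycle(credit_pattern)

          def enqueue(self, rank, packet):
              self.queues[rank].append(packet)

          def next_packet(self):
              rank_n = next(self.credits)
              if rank_n is not None and self.queues[rank_n]:
                  return rank_n, self.queues[rank_n].popleft()
              # Normal priority preemption: scan from highest priority down.
              for rank, q in enumerate(self.queues):
                  if q:
                      return rank, q.popleft()
              return None

      # Example: every fourth packet cycle the credit device forces rank 2.
      sched = CreditScheduler(3, [None, None, None, 2])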
  • Patent number: 7379468
    Abstract: In a packet distributing apparatus and a packet distributing method which, in a situation in which a link can be allocated to a specific user or specific traffic with priority, can effectively utilize any unoccupied link, a packet inputted via the first through Nth input interface circuits 203-1 through 203-N is provided by a transfer destination searching unit 205 with a transfer destination port and a transfer-admissible port. Usage detecting units 211-1 through 211-N then check the usage of a transfer-admissible port matching the pertinent one of the first through Nth output buffer units 209-1 through 209-N, and distribute the packet to a transfer-admissible port bearing less load than a prescribed level (a less loaded transfer-admissible port), such as an unloaded port. If there is no such less loaded transfer-admissible port, the packet is distributed to the transfer destination port for which it is destined in its own right.
    Type: Grant
    Filed: July 16, 2002
    Date of Patent: May 27, 2008
    Assignee: NEC Corporation
    Inventor: Tsugio Okamoto
  • Publication number: 20080112424
    Abstract: A contention reduction apparatus and method in a prioritized contention access (PCA) method of a Wireless Personal Area Network are provided. A contention reduction apparatus of a Wireless Personal Area Network includes: a prioritized contention access (PCA) section retriever which retrieves a PCA section from beacons received for a beacon period (BP); a PCA section divider which divides the retrieved PCA section; a device selector which selects devices using the PCA section; and a PCA section allocator which allocates the divided PCA section to the selected devices.
    Type: Application
    Filed: March 29, 2007
    Publication date: May 15, 2008
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yong Suk Kim, Jun Seo Lee, Chang Woo Seo, Jun Haeng Cho
  • Patent number: 7369491
    Abstract: The invention discloses methods and apparatus for regulating the transfer of data bursts across a data network comprising electronic edge nodes interconnected by fast-switching optical core nodes. To facilitate switching at an electronic edge node, data bursts are organized into data segments of equal size. A data segment may include null data in addition to information bits. The null data are removed at the output of an edge node and the information data is collated into bursts, each carrying only information bits in addition to a header necessary for downstream processing. To ensure loss-free transfer of bursts from the edge to the core, burst transfer permits are generated at controllers of the optical core and sent to respective edge nodes based on flow-rate-allocation requests. Null-padding is not visible outside the edge nodes and only the information content is subject to transfer rate regulation to ensure high efficiency and high service quality.
    Type: Grant
    Filed: May 14, 2003
    Date of Patent: May 6, 2008
    Assignee: Nortel Networks Limited
    Inventors: Maged E. Beshai, Bilel N. Jamoussi
  • Patent number: 7362761
    Abstract: A packet processing apparatus that realizes improved overall relaying performance by distributing input packets to multiple packet analyzing modules for information processing is disclosed. The packet processing apparatus includes a distributor for assigning a sequence number to each of the input packets and distributing the numbered packets. The packet analyzing modules realize parallel execution of information analyzing processes on the packets distributed from the distributor. An order correction buffer rearranges the packets supplied from the packet analyzing modules in order according to the sequence numbers assigned to the packets and outputs the packets in the rearranged order.
    Type: Grant
    Filed: October 30, 2003
    Date of Patent: April 22, 2008
    Assignee: Fujitsu Limited
    Inventors: Kazuyuki Suzuki, Shirou Uriu
  • Patent number: 7352695
    Abstract: A switch at a transmission end of a system including a number of memory devices defining queues for receiving traffic to be switched, each queue having an associated predetermined priority classification, and a processor for controlling the transmission of traffic from the queues. The processor transmits traffic from the higher priority queues before traffic from lower priority queues. The processor monitors the queues to determine whether traffic has arrived at a queue having a higher priority classification than the queue from which traffic is currently being transmitted. The processor suspends the current transmission after transmission of the current minimum transmittable element if traffic has arrived at a higher priority queue, transmits traffic from the higher priority queue, and then resumes the suspended transmission. At a receiving end, a switch that includes a processor separates the interleaved traffic into output queues for reassembly of individual traffic streams from the data stream.
    Type: Grant
    Filed: February 8, 2001
    Date of Patent: April 1, 2008
    Assignee: ALCATEL
    Inventor: Bart Joseph Gerard Pauwels
  • Patent number: 7349419
    Abstract: The present invention relates to determining a queue size for a network router based on the number of voice channels capable of being handled by a particular output link and a desired failure probability for transmitting voice information over that output link. Since it is infeasible to use statistics to calculate the actual queue size, the queue size is approximated as follows. First, an initial queue size is determined based on the desired failure probability and a number of voice channels that is lower than the desired number of channels. This initial number of voice channels is chosen within a range for which the queue size can still be calculated from the desired failure probability. From the initial queue size, the desired queue size is calculated by multiplying the initial queue size by a function that is a ratio of the desired number of voice channels to the initial number of voice channels.
    Type: Grant
    Filed: December 30, 2002
    Date of Patent: March 25, 2008
    Assignee: Nortel Networks Limited
    Inventor: Ian R. Philp
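    Illustrative sketch (not part of the patent): the two-step approximation the abstract above outlines. The statistical sizing of the initial queue is replaced by a toy placeholder, and the scaling function is taken to be the plain ratio of channel counts; both are assumptions for illustration.

      import math

      def initial_queue_size(num_channels, failure_prob):
          """Toy placeholder for the statistical calculation that is feasible
          only for a small number of voice channels (not the patent's model)."""
          return int(math.ceil(num_channels * -math.log10(failure_prob)))

      def desired_queue_size(desired_channels, initial_channels, failure_prob):
          """Scale the tractable initial estimate up to the desired channel count.

          The abstract multiplies the initial queue size by a function that is
          a ratio of channel counts; a plain linear ratio is assumed here."""
          base = initial_queue_size(initial_channels, failure_prob)
          return int(round(base * (desired_channels / initial_channels)))

      # Example with hypothetical numbers: size a queue for 480 channels from a
      # calculation done at 24 channels and a 1e-6 failure probability.
      print(desired_queue_size(480, 24, 1e-6))   # 2880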
  • Patent number: 7339943
    Abstract: An apparatus is described that includes a plurality of queuing paths. Each of the queuing paths further comprises an input queue, an intermediate queue and an output queue. The input queue has an output coupled to an input of the intermediate queue and the input of the output queue. The intermediate queue has an output coupled to the input of the output queue. The intermediate queue receives data units from the input queue if a state of the input queue has reached a threshold. The output queue receives data units from the intermediate queue if the intermediate queue has data units. The output queue receives data units from the input queue if the intermediate queue does not have data units.
    Type: Grant
    Filed: May 10, 2002
    Date of Patent: March 4, 2008
    Assignee: Altera Corporation
    Inventors: Neil Mammen, Greg Maturi, Mammen Thomas
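    Illustrative sketch (not part of the patent): one queuing path with input, intermediate, and output queues wired as the abstract above describes. The threshold value and the one-unit-per-step pacing are assumptions for illustration.

      from collections import deque

      class QueuingPath:
          """Data units normally flow input -> output; once the input queue
          reaches a threshold they detour through the intermediate queue,
          and the output queue drains the intermediate queue first."""

          def __init__(self, threshold=4):
              self.input_q = deque()
              self.intermediate_q = deque()
              self.output_q = deque()
              self.threshold = threshold

          def arrive(self, data_unit):
              self.input_q.append(data_unit)

          def step(self):
              # The intermediate queue receives from the input queue only when
              # the input queue has reached its threshold.
              if len(self.input_q) >= self.threshold:
                  self.intermediate_q.append(self.input_q.popleft())
              # The output queue prefers the intermediate queue, falling back
              # to the input queue when the intermediate queue is empty.
              if self.intermediate_q:
                  self.output_q.append(self.intermediate_q.popleft())
              elif self.input_q:
                  self.output_q.append(self.input_q.popleft())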
  • Patent number: 7336612
    Abstract: The present invention is directed to a high performance broadband ATM switching system comprised of concentrator, non-recirculating sort-trap and queuing stages. The concentrator stage concentrates cells entering the switch by discarding idle inputs thereto. Cells arriving at the non-recirculating sort-trap stage during a time slot are sorted based upon destination address and priority. Cells having a unique address for the time slot are placed in a corresponding output queue while cells having non-unique destination addresses are re-routed to a trap buffer which ages the cell until a subsequent time slot in which the destination address is unique.
    Type: Grant
    Filed: July 27, 2001
    Date of Patent: February 26, 2008
    Assignee: Sprint Communications Company L.P.
    Inventor: Saeeda Khankhel
  • Patent number: 7324537
    Abstract: In general, in one aspect, the disclosure describes a switching device that includes a plurality of ports. The ports operate at asymmetric speeds. The apparatus also includes a switching matrix to provide selective connectivity between the ports. The apparatus further includes a plurality of channels to connect the ports to the switching matrix. The number of channels associated with each port is determined by speed of the port.
    Type: Grant
    Filed: July 18, 2003
    Date of Patent: January 29, 2008
    Assignee: Intel Corporation
    Inventors: Ramaprasad Samudrala, Jaisimha Bannur, Anujan Varma
  • Patent number: 7324452
    Abstract: A system for managing data transmission from a number of queues employs a regular credit count and a history credit count for each queue. Generally, the regular credit count is used in arbitration. The count is decreased when data is transmitted from the queue and increased at given intervals if the queue is facing no transmission-blocking backpressure. The history credit count is increased in lieu of the regular credit count when transmission from the queue is blocked. Thus, the history credit count keeps track of potential transmission opportunities that would be lost due to the blocking of transmission from the queue. The history credit counts are periodically polled instead of the regular credit counts to give each queue an opportunity to catch up in its use of transmission opportunities.
    Type: Grant
    Filed: January 14, 2002
    Date of Patent: January 29, 2008
    Assignee: Fujitsu Limited
    Inventors: Hong Xu, Mark A. W. Stewart
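    Illustrative sketch (not part of the patent): per-queue regular and history credit counters, where refills missed under backpressure are banked in the history count and later returned when the history count is polled, following the abstract above. The refill amount and the catch-up policy are assumptions for illustration.

      class QueueCredits:
          """Per-queue credit state: the regular count drives arbitration; the
          history count accumulates refills missed while the queue was
          backpressured, so the queue can later catch up (illustrative)."""

          def __init__(self, refill=10):
              self.refill = refill
              self.regular = refill
              self.history = 0

          def on_refill_interval(self, backpressured):
              # Refill goes to the regular count normally, or is banked in the
              # history count when transmission from this queue is blocked.
              if backpressured:
                  self.history += self.refill
              else:
                  self.regular += self.refill

          def on_transmit(self, units=1):
              self.regular -= units

          def poll_history(self):
              # Periodic poll: move banked history credits into the regular
              # count so the queue can use its missed transmission opportunities.
              granted = self.history
              self.regular += granted
              self.history = 0
              return granted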
  • Patent number: 7319670
    Abstract: An apparatus for transmitting to a network comprises a queue, packetization logic, interface logic, and queue logic. The packetization logic is configured to packetize data into a plurality of data packets and to store, to the queue, entries pointing to the data packets. The interface logic is configured to read the entries from the queue. The interface logic, for each of the read entries, is configured to retrieve one of the packets pointed to by the read entry and to transmit the retrieved packet to a network socket. The queue logic is configured to limit, based on a number of retransmission requests detected by the queue logic, a number of entries that the packetization logic may store to the queue during a particular time period thereby controlling a transmission rate of the apparatus.
    Type: Grant
    Filed: February 8, 2003
    Date of Patent: January 15, 2008
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jeffrey Joel Walls, Michael T Hamilton
  • Patent number: 7313147
    Abstract: A network device for transmitting data of a host system to a network, including a buffer for storing the data, a first transmission interface providing a control signal to the host system, and a second transmission interface coupled to the buffer for transmitting the data from the buffer to the network. A data transmission method of the network device includes the following steps: providing a network device and a host system, wherein the network device provides a control signal to the host system and immediately activates a frame transmission procedure after providing the control signal; the host system starting to transmit data to the network device after it receives the control signal; and the network device transmitting the data to the network after a frame transmission pre-procedure is finished and the data of the host system have been completely transmitted to the network device.
    Type: Grant
    Filed: February 9, 2005
    Date of Patent: December 25, 2007
    Assignee: Infineon-ADMtek Co., Ltd.
    Inventor: Sheng-Yuan Cheng
  • Patent number: 7298755
    Abstract: An apparatus for communicating with a network comprises a data packet pipeline and a monitoring element. The data packet pipeline is configured to transfer data between a buffer and a network socket. The monitoring element is configured to provide an indication of an operational performance parameter for at least one component of the data packet pipeline, thereby enabling an operational problem within the pipeline to be isolated based on the indication.
    Type: Grant
    Filed: February 8, 2003
    Date of Patent: November 20, 2007
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jeffrey Joel Walls, Michael T Hamilton
  • Patent number: 7292578
    Abstract: A VTMS queue scheduler integrates traffic shaping and link sharing functions within a single mechanism that scales to an arbitrary number of queues of an intermediate station in a computer network. The scheduler assigns committed information bit rate and excess information bit rate values per queue, along with a shaped maximum bit rate per media link of the station. The integration of shaping and sharing functions decreases latency-induced inaccuracies by eliminating a queue and feedback mechanism between the sharing and shaping functions of conventional systems.
    Type: Grant
    Filed: June 19, 2001
    Date of Patent: November 6, 2007
    Assignee: Cisco Technology, Inc.
    Inventors: Darren Kerr, Van Jacobson
  • Patent number: 7277446
    Abstract: Data packets are received at a communications node. Each of the received data packets is associated with one of a set of different service classes. Packets corresponding to the received data packets are transmitted to recipients. The order in which the data packets are transmitted is controlled based on the transmission rate and the service class of the packets.
    Type: Grant
    Filed: November 2, 2000
    Date of Patent: October 2, 2007
    Assignee: Airvana, Inc.
    Inventors: Firas Abi-Nassif, Dae-Young Kim, Pierre A. Humblet, M. Vedat Eyuboglu
  • Patent number: 7272149
    Abstract: A system for shaping traffic from a plurality of data streams includes a queuing stage having a plurality of first-in, first-out shaping queues, the queuing stage being configured to classify incoming entries of traffic, and to assign an incoming element of traffic to a selected queue of the first queuing stage depending on characteristics of the element, the queuing stage further being configured to allocate bandwidth to each of the queues using time division multiplexing. A method for shaping traffic from a plurality of data streams includes providing a plurality of first-in, first-out queues; assigning traffic to the queues depending on the characteristics of the traffic; and controlling traffic flow out of the queues using a bandwidth allocation table.
    Type: Grant
    Filed: August 19, 2002
    Date of Patent: September 18, 2007
    Assignee: World Wide Packets, Inc.
    Inventors: Keith Michael Bly, C Stuart Johnson
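    Illustrative sketch (not part of the patent): FIFO shaping queues served according to a bandwidth allocation table, i.e. a repeating sequence of time slots naming which queue may send in each slot, as the abstract above describes. The table contents and classification rule are assumptions for illustration.

      from collections import deque
      from itertools import cycle

      class ShapingStage:
          """Traffic is classified into FIFO shaping queues and released
          according to a repeating bandwidth allocation table (time-division
          multiplexing of the output among the queues)."""

          def __init__(self, num_queues, allocation_table):
              self.queues = [deque() for _ in range(num_queues)]
              self.slots = cycle(allocation_table)   # e.g. [0, 0, 1, 0, 2]

          def classify(self, element, queue_id):
              # Assignment by traffic characteristics is reduced here to an
              # explicit queue id supplied by the caller.
              self.queues[queue_id].append(element)

          def next_slot(self):
              # Serve whichever queue owns the current time slot, if it has data.
              qid = next(self.slots)
              return self.queues[qid].popleft() if self.queues[qid] else None

      # A table giving queue 0 three of every five slots, queues 1 and 2 one each.
      stage = ShapingStage(3, [0, 0, 1, 0, 2])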
  • Patent number: 7272150
    Abstract: A system for shaping traffic from a plurality of data streams comprised of a first queuing stage configured to shape traffic from the data streams and having a plurality of shaping queues; and a second queuing stage coupled to the first queuing stage and configured to manage congestion from the first queuing stage that occurs when multiple of the shaping queues become eligible to send traffic at substantially the same time.
    Type: Grant
    Filed: August 19, 2002
    Date of Patent: September 18, 2007
    Assignee: World Wide Packets, Inc.
    Inventors: Keith Michael Bly, C Stuart Johnson
  • Patent number: 7260104
    Abstract: A method and apparatus for temporarily deferring transmission of packets/frames to a destination port in a buffered switch is disclosed. When a request for transmission of at least one packet/frame to the destination port is received, it is determined whether the destination port is available to receive the at least one packet/frame. The transmission of the at least one packet/frame is deferred when the destination port is not available to receive the at least one packet/frame. The packet/frame identifier and memory location for each deferred packet/frame is stored in a deferred queue and the process then repeats for the next packet/frame. Periodically, the apparatus attempts to transmit the packets/frames in the deferred queue to their respective destination ports.
    Type: Grant
    Filed: December 19, 2001
    Date of Patent: August 21, 2007
    Assignee: Computer Network Technology Corporation
    Inventor: Steven G. Schmidt
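    Illustrative sketch (not part of the patent): frames aimed at a busy destination port are recorded in a deferred queue (identifier plus buffer location) and retried periodically, as the abstract above describes. Modeling port availability as a set and the retry pass as a single sweep are assumptions for illustration.

      from collections import deque

      class DeferredTransmitter:
          """Frames aimed at an unavailable destination port are recorded in a
          deferred queue (identifier plus memory location) and retried later."""

          def __init__(self):
              self.deferred = deque()        # entries: (frame_id, mem_location, dest_port)
              self.available_ports = set()   # ports currently able to receive

          def request_transmit(self, frame_id, mem_location, dest_port, send):
              # Transmit immediately if the destination port is available,
              # otherwise defer the frame.
              if dest_port in self.available_ports:
                  send(mem_location, dest_port)
              else:
                  self.deferred.append((frame_id, mem_location, dest_port))

          def retry_deferred(self, send):
              # Periodic pass over the deferred queue; still-blocked frames are
              # put back to wait for the next pass.
              for _ in range(len(self.deferred)):
                  frame_id, mem_location, dest_port = self.deferred.popleft()
                  if dest_port in self.available_ports:
                      send(mem_location, dest_port)
                  else:
                      self.deferred.append((frame_id, mem_location, dest_port))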
  • Patent number: 7249228
    Abstract: Mechanisms for reducing the number of block masks required for programming multiple access control lists in an associative memory are disclosed. A combined ordering of masks corresponding to multiple access control lists (ACLs) is typically identified, with the multiple ACLs including n ACLs. An n-dimensional array is generated, wherein each axis of the n-dimensional array corresponds to masks in their requisite order of a different one of the multiple ACLs. The n-dimensional array progressively identifies numbers of different masks required for subset orderings of masks required for subsets of the multiple ACLs. The n-dimensional array is traversed to identify a sequence of masks corresponding to a single ordering of masks including masks required for each of the multiple ACLs.
    Type: Grant
    Filed: March 1, 2004
    Date of Patent: July 24, 2007
    Assignee: Cisco Technology, Inc.
    Inventors: Amit Agarwal, Venkateshwar Rao Pullela, Qizhong Chen
  • Patent number: 7245615
    Abstract: The present invention comprises a technique for performing a reassembly assist function that enables a processor to perform packet reassembly in a deterministic manner. The technique employed by the present invention enables a processor to reassemble a packet without having to extend its normal processing time to reassemble a varying number of fragments into a packet. The invention takes advantage of the fact that the reassembly assist can be dedicated exclusively to reassembling a packet from a series of fragments, thereby offloading the reassembly process from the processor.
    Type: Grant
    Filed: October 30, 2001
    Date of Patent: July 17, 2007
    Assignee: Cisco Technology, Inc.
    Inventors: Kenneth H. Potter, Michael L. Wright, Hong-Man Wu
  • Patent number: 7242692
    Abstract: A method for coordinating packet transmission order for a plurality of registers of different priority levels is disclosed. Packets are transmitted from the registers according to the priority levels in a normal condition. A count value is generated in response to the transmitted packets. A particular priority level of one of the registers, from which a packet is being transmitted out, is recorded when the count value is larger than a predetermined threshold. Then the normal condition switches into a cleaning condition, and one packet is transmitted from each of the registers which are not empty and have priority levels lower than the particular priority level, according to priority. Finally, the count value is reset and operation returns to the normal condition. A device for coordinating packet transmission order for a plurality of registers of different priority levels is also disclosed.
    Type: Grant
    Filed: July 29, 2002
    Date of Patent: July 10, 2007
    Assignee: VIA Technologies, Inc.
    Inventors: Cheng-Yuan Wu, Stone Wei, Chih Hsien Weng
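    Illustrative sketch (not part of the patent): priority registers served strictly by priority until a packet counter exceeds a threshold, at which point one packet is released from every non-empty register of lower priority than the one just served, the counter is reset, and normal operation resumes, following the abstract above. The threshold and register count are assumptions for illustration.

      from collections import deque

      class PriorityRegisters:
          """Registers indexed by priority (0 = highest).  Normally strict
          priority is used; once `count` exceeds `threshold`, one packet is
          drained from each non-empty lower-priority register, the count is
          reset, and normal operation resumes (illustrative parameters)."""

          def __init__(self, num_levels, threshold=16):
              self.registers = [deque() for _ in range(num_levels)]
              self.threshold = threshold
              self.count = 0

          def enqueue(self, level, packet):
              self.registers[level].append(packet)

          def transmit(self):
              sent = []
              for level, reg in enumerate(self.registers):   # normal condition
                  if reg:
                      sent.append(reg.popleft())
                      self.count += 1
                      if self.count > self.threshold:
                          # Cleaning condition: one packet from every non-empty
                          # register of lower priority than the recorded level.
                          for lower in self.registers[level + 1:]:
                              if lower:
                                  sent.append(lower.popleft())
                          self.count = 0
                      break
              return sent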
  • Patent number: 7236501
    Abstract: A packet header processing engine receives a header of a packet. The received header includes a size of the packet. A maximum transfer unit size of a destination interface of the packet may be determined. The packet header processing engine determines whether the size of the packet exceeds the maximum transfer unit size of the destination interface. If the size of the packet does not exceed the maximum transfer unit size of the destination interface, the packet header processing engine generates a new header from the received header. If the size of the packet exceeds the maximum transfer unit size of the destination interface, the packet header processing engine generates a fragment header from the received header. The packet header processing engine may recycle the fragment header for further processing in addition to forming a first fragment packet from the fragment header.
    Type: Grant
    Filed: March 22, 2002
    Date of Patent: June 26, 2007
    Assignee: Juniper Networks, Inc.
    Inventors: Raymond Marcelino Manese Lim, Jeffrey G. Libby
  • Patent number: 7221650
    Abstract: A system and method checks whether messages exchanged between first and second modules are being lost or gained. The first module has a request counter and a capture register. The second module has a request accumulator and a capture register. As the first module issues and receives messages, it increments and decrements its request counter. As the second module receives and issues messages, it increments and decrements its request accumulator. To check for lost or gained messages, the first module copies the current value of its request counter into its capture register, and issues a marker to the second module. The first module decrements its capture register in response to receiving post-marker messages, but does not increment its capture register. Upon receipt of the marker, the second module copies the current value of its request accumulator into its capture register, and returns the marker to the first module. When the first module receives the marker, it stops decrementing its capture register.
    Type: Grant
    Filed: December 23, 2002
    Date of Patent: May 22, 2007
    Assignee: Intel Corporation
    Inventors: Jeffrey L. Cooper, Henry Charles Benz
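    Illustrative sketch (not part of the patent): the counter-and-marker exchange the abstract above describes, reduced to two objects calling each other directly, so the marker is exchanged instantaneously and no messages are in flight when the check starts. Under that simplifying assumption the two capture registers must match when nothing was lost or gained; the class and method names are assumptions for illustration.

      class FirstModule:
          """Issues requests and receives responses; request_counter tracks
          outstanding requests (issued minus responses received)."""

          def __init__(self):
              self.request_counter = 0
              self.capture = None
              self.checking = False

          def issue_request(self, peer):
              self.request_counter += 1
              peer.receive_request()

          def receive_response(self):
              self.request_counter -= 1
              if self.checking:
                  self.capture -= 1          # post-marker responses drain the capture

          def check_for_loss(self, peer):
              # Snapshot outstanding requests, send the marker, and compare the
              # snapshot against the peer's snapshot when the marker returns.
              self.capture = self.request_counter
              self.checking = True
              peer_capture = peer.receive_marker()   # marker out and back, instantly
              self.checking = False                  # stop decrementing the capture
              return self.capture == peer_capture    # True => nothing lost or gained

      class SecondModule:
          """Receives requests and issues responses; request_accumulator tracks
          requests received but not yet answered."""

          def __init__(self):
              self.request_accumulator = 0
              self.capture = None

          def receive_request(self):
              self.request_accumulator += 1

          def issue_response(self, peer):
              self.request_accumulator -= 1
              peer.receive_response()

          def receive_marker(self):
              self.capture = self.request_accumulator
              return self.capture

      # Example: two requests, one answered, then a check; the snapshots agree.
      a, b = FirstModule(), SecondModule()
      a.issue_request(b); a.issue_request(b); b.issue_response(a)
      print(a.check_for_loss(b))   # True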
  • Patent number: 7203202
    Abstract: An exhaustive service dual round-robin matching (EDRRM) arbitration process amortizes the cost of a match over multiple time slots. It achieves high throughput under nonuniform traffic. Its delay performance is not sensitive to traffic burstiness, switch size and packet length. Since cells belonging to the same packet are transferred to the output continuously, packet delay performance is improved and packet reassembly is simplified.
    Type: Grant
    Filed: October 31, 2002
    Date of Patent: April 10, 2007
    Assignee: Polytechnic University
    Inventors: Hung-Hsiang Jonathan Chao, Yihan Li, Shivendra S. Panwar
  • Patent number: 7197044
    Abstract: A method for managing congestion in a stack of network switches includes the steps of receiving an incoming packet on a first port of a network switch for transmission to a destination port and determining if the destination port of the packet is a monitored port. Thereafter, the method determines a queue status of the destination port, if the destination port is determined to be a monitored port, and preschedules transmission of the incoming packet to the destination port if the destination port is determined to be a monitored port.
    Type: Grant
    Filed: March 17, 2000
    Date of Patent: March 27, 2007
    Assignee: Broadcom Corporation
    Inventors: Shiri Kadambi, Mohan Kalkunte, Shekhar Ambe
  • Patent number: 7184443
    Abstract: Methods and apparatus are disclosed for scheduling packets, such as in systems having a non-blocking switching fabric and homogeneous or heterogeneous line card interfaces. In one implementation, multiple request generators, grant arbiters, and acceptance arbiters work in conjunction to determine this scheduling. A set of requests for sending packets from a particular input is generated. From a grant starting position, a first n requests in a predetermined sequence are identified, where n is less than or equal to the maximum number of connections that can be used in a single packet time to the particular output. The grant starting position is updated in response to the first n grants including a particular grant corresponding to a grant advancement position. In one embodiment, the set of grants generated based on the set of requests is similarly determined using an acceptance starting position and an acceptance advancement position.
    Type: Grant
    Filed: March 30, 2002
    Date of Patent: February 27, 2007
    Assignee: Cisco Technology, Inc.
    Inventors: Flavio Giovanni Bonomi, Patrick A. Costello, Robert E. Brandt
  • Patent number: 7170903
    Abstract: Arbitration for a switch fabric (e.g., an input-buffered switch fabric) is performed. For a first port, a link subset from a set of links associated with the first port is determined. Each link from the link subset is associated with its own candidate packet and is associated with its own weight value. A link from the link subset for the first port is selected based on the weight value associated with each link from the link subset for the first port. For a second port, a link subset from a set of links associated with the second port is determined. Each link from the link subset associated with the second port is associated with its own candidate packet and is associated with its own weight value. The determining for the second port is performed in parallel with the determining for the first port. A link from the link subset for the second port is selected based on the weight value associated with each link from the link subset associated with the second port.
    Type: Grant
    Filed: December 30, 2003
    Date of Patent: January 30, 2007
    Assignee: Altera Corporation
    Inventors: Mehdi Alasti, Kamran Sayrafian-Pour, Vahid Tabatabaee
  • Patent number: 7161906
    Abstract: A switch fabric for routing data has a switching stage configured between an input stage and an output stage. The input stage forwards the received data to the switching stage, which routes the data to the output stage, which transmits the data towards destinations. In one aspect, at least one input port can be programmably configured to store data in two or more input routing queues that are associated with a single output port, and at least one output port can be programmably configured to receive data from two or more output routing queues that are associated with a single input port. In another aspect, the output stage transmits status information about the output stage to the input stage, which uses the status information to generate bids to request connections through the switching stage.
    Type: Grant
    Filed: December 14, 2001
    Date of Patent: January 9, 2007
    Assignee: Agere Systems Inc.
    Inventors: Martin S. Dell, Zbigniew M. Dziong, Wei Li, Yu-Kuen Ouyang, Matthew Tota, Yung-Terng Wang
  • Patent number: 7161950
    Abstract: A switch and a process of operating a switch are described where a received data frame is stored into memory in a systematic way. In other words, a location is selected in the memory to store the received data frame using a non-random method. By storing the received data frame in this way, switches that employ this system and method increase bandwidth by avoiding delays incurred in randomly guessing at vacant spaces in the memory. The received data frame is stored until a port that is to transmit the received data frame is available. Throughput is further improved by allowing the received data frames to be stored in either contiguous or non-contiguous memory locations.
    Type: Grant
    Filed: December 10, 2001
    Date of Patent: January 9, 2007
    Assignee: Intel Corporation
    Inventor: Rahul Saxena
  • Patent number: 7158527
    Abstract: Example embodiments of protocol multiplexing systems comprise a multiplexer which receives multiplexed packet(s) and which uses contents of the multiplexed packets to form carrying packets which are stored in an output buffer. Some of the multiplexed packets belong to differing ones of plural virtual channels, but the multiplexer uses multiplexed packet(s) belonging to only one virtual channel to form a given carrying packet. The multiplexing systems accommodate transmission on a same virtual path of numerous connections belonging to differing virtual channels, balancing both payload efficiency and delay considerations.
    Type: Grant
    Filed: April 2, 2002
    Date of Patent: January 2, 2007
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Szabolcs Malomsoky, Szilveszter Nádas, Sándor Rácz
  • Patent number: 7145868
    Abstract: A method and system for detecting and controlling congestion in a multi-port shared memory switch in a communications network. The proposed congestion management scheme implements a local and a global congestion monitoring process. The local monitoring process monitors the queue depth. When the queue depth for any queue exceeds a queue length threshold a congestion control mechanism is implemented to limit incoming data traffic destined for that queue. Additionally, the global congestion monitoring process monitors the shared memory buffer and if the traffic thereto exceeds a shared memory buffer threshold a congestion control mechanism limits incoming traffic destined for any output queue which has been exceeding a fair share threshold value.
    Type: Grant
    Filed: November 28, 1997
    Date of Patent: December 5, 2006
    Assignee: Alcatel Canada Inc.
    Inventors: Natalie Giroux, Mustapha Aïssaoui
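    Illustrative sketch (not part of the patent): the local per-queue depth check combined with the global shared-buffer check and the per-queue fair-share test, as the abstract above describes. All threshold values below are assumptions for illustration.

      class SharedMemorySwitchMonitor:
          """Flags congestion locally (any queue deeper than its threshold) and
          globally (total shared-buffer usage above a limit, in which case only
          queues exceeding their fair share are throttled)."""

          def __init__(self, queue_threshold, buffer_threshold, fair_share):
              self.queue_threshold = queue_threshold
              self.buffer_threshold = buffer_threshold
              self.fair_share = fair_share

          def queues_to_throttle(self, queue_depths):
              """queue_depths: list of current depths, one per output queue."""
              throttled = set()
              # Local monitoring: any queue over its own length threshold.
              for qid, depth in enumerate(queue_depths):
                  if depth > self.queue_threshold:
                      throttled.add(qid)
              # Global monitoring: shared buffer over its limit -> throttle the
              # queues using more than their fair share.
              if sum(queue_depths) > self.buffer_threshold:
                  for qid, depth in enumerate(queue_depths):
                      if depth > self.fair_share:
                          throttled.add(qid)
              return throttled

      # Example with hypothetical thresholds.
      mon = SharedMemorySwitchMonitor(queue_threshold=100, buffer_threshold=400,
                                      fair_share=64)
      print(mon.queues_to_throttle([30, 120, 50, 250]))   # {1, 3}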