Having Both Input And Output Queuing Patents (Class 370/413)
  • Publication number: 20100061324
    Abstract: A first device (124) in a High Speed Downlink Packet Access environment (100) may generate a High Speed Downlink Shared Channel data frame (700, 730, 735, 750) that includes a group of packet data units, where a first packet data unit of the group of packet data units is of a different length than a second packet data unit of the group of packet data units. The first device (124) may further transfer the High Speed Downlink Shared Channel data frame (700, 730, 735, 750) to a second device (122).
    Type: Application
    Filed: January 7, 2008
    Publication date: March 11, 2010
    Applicant: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Min Liao, Anna Larmo, Peter Lundh, Szilveszter Nadas, Ina Widegren
  • Publication number: 20100054270
    Abstract: A buffer temporarily stores data received from a network by a receiving unit. An output mode switching unit switches the mode in which the data received by the receiving unit is output to the buffer, between FIFO and FILO, in accordance with the storage amount of data temporarily stored in the buffer. For example, if the data temporarily stored in the buffer falls below a given threshold value of the buffer, data is stored in the buffer in FIFO. If the data temporarily stored in the buffer exceeds a given threshold value of the buffer, data is stored in the buffer in FILO. A sending unit outputs data taken from the buffer in FIFO or FILO, to a network.
    Type: Application
    Filed: November 6, 2009
    Publication date: March 4, 2010
    Applicant: FUJITSU LIMITED
    Inventor: Atsushi Shinozaki
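The mode switching described in the abstract above lends itself to a short illustration. The following is a minimal Python sketch, not Fujitsu's implementation; the class name, the single occupancy threshold, and the method names are assumptions made purely for illustration.

```python
from collections import deque

class ModeSwitchingBuffer:
    """Toy model of a buffer that stores arriving data in FIFO order while
    occupancy is below a threshold and in FILO order once it exceeds it."""

    def __init__(self, threshold):
        self.threshold = threshold   # occupancy level that triggers FILO storage
        self.buf = deque()

    def receive(self, item):
        if len(self.buf) < self.threshold:
            self.buf.append(item)        # FIFO: newest item goes to the tail
        else:
            self.buf.appendleft(item)    # FILO: newest item goes to the head

    def send(self):
        # The sending side always drains from the head of the buffer.
        return self.buf.popleft() if self.buf else None

# With a threshold of 3, the fourth and fifth arrivals jump ahead of the backlog.
b = ModeSwitchingBuffer(threshold=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    b.receive(pkt)
print([b.send() for _ in range(5)])   # ['p5', 'p4', 'p1', 'p2', 'p3']
```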
  • Patent number: 7668890
    Abstract: Prefix searches for directing internet data packets are performed in a prefix search integrated circuit. The integrated circuit includes an array of search engines, each of which accesses a prefix search tree data structure to process a prefix search. An SDRAM is dedicated to each search engine, and SDRAMs share address and control pins to plural search engines on the IC chip. Internal nodes of the tree data structure are duplicated across banks of the SDRAMs to increase bandwidth, and leaf nodes are stored across the SDRAM banks to reduce storage requirements. Within each search engine, data stored in a data register from an SDRAM is compared to a prefix search key stored in a key register. Based on that comparison, an address is calculated to access further tree structure data from the SDRAM. Packet descriptors containing search keys are forwarded to the search engines from an input queue and the search results are forwarded to an output queue, the same packet order being maintained in the two queues.
    Type: Grant
    Filed: October 18, 2006
    Date of Patent: February 23, 2010
    Assignee: FutureWei Technologies, Inc.
    Inventors: Gregory M. Waters, Larry R. Dennison, Philip P. Carvey, William J. Dally, William F. Mann
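The patent above describes a hardware array of search engines with dedicated SDRAMs; the sketch below only illustrates the underlying longest-prefix-match walk over a tree structure in plain Python. The binary-trie layout, names, and next-hop strings are illustrative assumptions, not the patented node format.

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # one child per bit value
        self.next_hop = None           # set on nodes that terminate a prefix

def insert(root, prefix_bits, next_hop):
    """Install a prefix (given as a string of '0'/'1' bits) into the trie."""
    node = root
    for bit in prefix_bits:
        i = int(bit)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def longest_prefix_match(root, key_bits):
    """Walk the trie bit by bit, remembering the last matching prefix."""
    node, best = root, None
    for bit in key_bits:
        node = node.children[int(bit)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10", "port A")       # a short prefix
insert(root, "1011", "port B")     # a longer, more specific prefix
print(longest_prefix_match(root, "101100"))  # 'port B'
print(longest_prefix_match(root, "100000"))  # 'port A'
```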
  • Patent number: 7664127
    Abstract: A method of resolving mutex contention within a network interface unit which includes providing a plurality of memory access channels, and moving a thread via at least one of the plurality of memory access channels, the plurality of memory access channels allowing moving of the thread while avoiding mutex contention when moving the thread via the at least one of the plurality of memory access channels is disclosed.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: February 16, 2010
    Assignee: Sun Microsystems, Inc.
    Inventors: Ariel Hendel, Michael Wong, Yatin Gajjar, Shimon Muller
  • Publication number: 20100034213
    Abstract: The invention allows data originating according to a first communications standard to be transmitted over a physical layer using a second communications standard. According to an embodiment of the invention, a data stream is received from a physical transmission medium that uses a particular first communications standard. Next, a data type identification (DTID) is appended to each byte in the data stream, thereby creating a technology-independent data stream having a particular bit rate. This bit rate is then matched to a different bit rate that corresponds to a second communications standard. The technology-independent data stream is then transmitted over a physical transmission medium that uses the second communications standard.
    Type: Application
    Filed: October 19, 2009
    Publication date: February 11, 2010
    Applicant: Broadcom Corporation
    Inventors: Kevin Brown, Richard G. Thousand
  • Patent number: 7660322
    Abstract: In a first aspect, a first method is provided for sharing a multiple queue Ethernet adapter. The first method includes the steps of receiving a frame or packet in the adapter and determining whether the frame or packet is for one or more of a plurality of partitions that share the adapter. If the frame or packet is for one or more of the plurality of partitions that share the adapter, the method further includes (1) storing the frame or packet in an adapter cache memory; (2) determining one or more of the plurality of partitions to which the frame or packet is to be sent; and (3) transferring the frame or packet from the adapter cache memory to a receive queue of each of the one or more partitions to which the frame or packet is to be sent. Numerous other aspects are provided.
    Type: Grant
    Filed: December 11, 2003
    Date of Patent: February 9, 2010
    Assignee: International Business Machines Corporation
    Inventors: Harvey G. Kiel, Lee A. Sendelbach
  • Patent number: 7639704
    Abstract: The message switching system (51) comprises at least two inputs (52, 53, 54, 55) and at least one output (56), first arbitration means (62) dedicated to said output (56), and management means (64) designed to determine a relative order OR(i,j) of one input relative to the other, for any pair of separate inputs belonging to the system (51) and having sent requests for the assignment of said output (56), and designed to assign said output (56). Said management means (64) comprise storage means (70) designed to store said relative orders OR(i,j), initialization means (66) designed to initialize said relative orders OR(i,j) such that only one of said inputs takes priority on initialization, and updating means (68) designed to update all of said relative orders when a new request arrives at said first arbitration means (62), or when said output is assigned to one of said inputs.
    Type: Grant
    Filed: July 11, 2006
    Date of Patent: December 29, 2009
    Assignee: Arteris
    Inventors: Philippe Boucard, Luc Montperrus
  • Patent number: 7639679
    Abstract: To selectively route stand-by packets in input modules to destination output modules via a switching matrix, distributed arbitration functions are executable by successive arbitration cycles. Each cycle comprises: a first phase executable by each input controller to send each output controller requests representative of the quantities of required stand-by packets; a second phase executable by each output controller to determine the quantity of admissible packets depending on the requests; a third phase executable by a central arbitration unit to determine allowed aggregate quantities depending on all the admissible quantities; and a fourth phase executable by each input controller to determine the allowed packet quantities depending on the admissible quantities and the allowed aggregate quantities.
    Type: Grant
    Filed: September 29, 2006
    Date of Patent: December 29, 2009
    Assignee: Alcatel
    Inventors: Ludovic Noirie, Georg Post, Silvio Cucchi, Fabio Valente
  • Patent number: 7639707
    Abstract: A variable size first in first out (FIFO) memory is disclosed. The variable size FIFO memory may include head and tail FIFO memories operating at a very high data rate and an off chip buffer memory. The off chip buffer memory may be, for example, of a dynamic RAM type. The off chip buffer memory may temporarily store data packets when both head and tail FIFO memories are full. Data blocks of each of the memories may be the same size for efficient transfer of data. After a sudden data burst which causes memory overflow ceases, the head and tail FIFO memories return to their initial functions, with the head FIFO memory directly receiving high speed data and transmitting it to various switching elements and the tail FIFO memory storing temporary overflows of data from the head FIFO memory.
    Type: Grant
    Filed: September 27, 2005
    Date of Patent: December 29, 2009
    Inventor: Chris Haywood
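A rough software analogue of the head/tail FIFO arrangement described above, with the off-chip buffer modelled as an ordinary Python deque. The capacities, refill policy, and method names are assumptions made for illustration, not the patented design.

```python
from collections import deque

class VariableSizeFifo:
    """Toy model: a small head FIFO feeds the reader, a small tail FIFO
    absorbs bursts, and an unbounded 'off-chip' buffer takes the overflow."""

    def __init__(self, head_cap, tail_cap):
        self.head = deque()      # fast on-chip FIFO drained by the reader
        self.tail = deque()      # fast on-chip FIFO catching overflow from head
        self.offchip = deque()   # slower overflow storage used only during bursts
        self.head_cap, self.tail_cap = head_cap, tail_cap

    def write(self, block):
        if len(self.head) < self.head_cap and not self.tail and not self.offchip:
            self.head.append(block)          # normal case: straight into head
        elif len(self.tail) < self.tail_cap and not self.offchip:
            self.tail.append(block)          # head full: spill into tail
        else:
            self.offchip.append(block)       # both full: spill off chip

    def read(self):
        block = self.head.popleft() if self.head else None
        # Refill the head from the tail first (it holds the oldest spilled
        # data), then from the off-chip buffer, preserving overall FIFO order.
        refill = self.tail or self.offchip
        if refill and len(self.head) < self.head_cap:
            self.head.append(refill.popleft())
        return block

f = VariableSizeFifo(head_cap=2, tail_cap=2)
for i in range(7):
    f.write(i)                        # 0,1 -> head; 2,3 -> tail; 4,5,6 -> off chip
print([f.read() for _ in range(7)])  # [0, 1, 2, 3, 4, 5, 6]
```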
  • Patent number: 7636367
    Abstract: A method and apparatus for overbooking FIFO memory have been disclosed.
    Type: Grant
    Filed: May 31, 2006
    Date of Patent: December 22, 2009
    Assignee: Integrated Device Technology, Inc.
    Inventor: Sibing Wang
  • Patent number: 7633865
    Abstract: A technique for controlling a packet data network to maintain network stability and efficiently utilize network resources through mechanisms involving per-destination queues and urgency weights for medium access control. The technique jointly controls congestion, scheduling, and contention resolution on hop-by-hop basis, such that the length of queues of packets at a node does not become arbitrarily large. In one embodiment, queue lengths and urgency weights may be transmitted and received via medium access control messages.
    Type: Grant
    Filed: January 19, 2007
    Date of Patent: December 15, 2009
    Assignee: Alcatel-Lucent USA Inc.
    Inventors: Daniel Matthew Andrews, Piyush Gupta, Iraj Saniee, Aleksandr Stolyar
  • Patent number: 7623525
    Abstract: In a communication system using multicasting, multicast packets are forwarded through a switch by destination ports after these ports receive the packets. A source node sends the multicast packet to a subset of nodes within the multicast group, which in turn forward the multicast packet to other subsets of nodes within the multicast group that have yet to receive the information. This is continued until all ports within the multicast group have received the information.
    Type: Grant
    Filed: October 31, 2007
    Date of Patent: November 24, 2009
    Assignee: AT&T Corp.
    Inventor: Aleksandra Smiljanic
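The staged fan-out in the abstract above can be pictured as a round-based relay: every port that already holds the multicast packet forwards it to ports that do not yet have it. The sketch below is an illustrative model under that reading; the fanout parameter and round structure are assumptions, not the patented switch mechanism.

```python
def staged_multicast(source, group, fanout=2):
    """Deliver a packet from `source` to every port in `group` in rounds:
    each port that already holds the packet sends it to up to `fanout`
    ports that have not yet received it."""
    have = {source}
    pending = [p for p in group if p != source]
    rounds = []
    while pending:
        sent_this_round = []
        for sender in list(have):
            for _ in range(fanout):
                if not pending:
                    break
                dst = pending.pop(0)
                have.add(dst)
                sent_this_round.append((sender, dst))
        rounds.append(sent_this_round)
    return rounds

# Eight ports are covered in a few relay rounds instead of one big burst.
for i, r in enumerate(staged_multicast("p0", [f"p{i}" for i in range(8)])):
    print(f"round {i}: {r}")
```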
  • Patent number: 7620059
    Abstract: A method and a fiber channel switch element for processing receive-modify-send (“RMS”) frames in a fiber channel network are provided. The method includes, determining if a received frame is a RMS frame; modifying the RMS frame without copying the RMS frame to a transmit buffer; and transmitting the modified frame. The RMS frame is modified in a receive buffer before being sent to the transmit buffer and a port state machine controls the receive buffer where RMS frames are modified. The switch element includes a port having a state machine that determines if a received frame needs to be modified before being transmitted, and if the frame is to be modified then such modification occurs in a receive buffer without being copied to a transmit buffer before such modification. A buffer select logic selects the appropriate buffer for modifying and transmitting frames from.
    Type: Grant
    Filed: July 12, 2004
    Date of Patent: November 17, 2009
    Assignee: QLOGIC, Corporation
    Inventors: Melanie A Fike, William J. Wen
  • Publication number: 20090279560
    Abstract: Techniques are given for determining the data transmission or sending rates in a router or switch of two or more input queues in one or more input ports sharing an output port, which may optionally include an output queue. The output port receives desired or requested data from each input queue sharing the output port. The output port analyzes this data and sends feedback to each input port so that, if needed, the input port can adjust its transmission or sending rate.
    Type: Application
    Filed: July 21, 2009
    Publication date: November 12, 2009
    Inventors: Jason A. Jones, Michael T. Guttman, Max S. Tomlinson, JR.
  • Patent number: 7606203
    Abstract: An apparatus for and method of packet loss measurement for TLS connections in an MEN that overcomes the problems of the prior art. The mechanism is operative to ensure that only one copy of each received packet is counted when exiting the TLS connection (unless dropped along the path due to congestion, etc.), so that the number of egress packets counted can be accurately compared with the number of ingress packets counted. Only a single copy of each packet is marked, i.e. flagged, at ingress. Any bridging along the path that needs to duplicate the packet forwards only a single marked copy of the packet. All other copies are forwarded unmarked. At the egress from the TLS, only marked packets are counted by the egress counter. In a second embodiment, a duplication field is inserted into each packet to better track all the duplicate copies of a packet.
    Type: Grant
    Filed: July 30, 2003
    Date of Patent: October 20, 2009
    Assignee: Atrica Israel Ltd.
    Inventors: Lior Shabtay, Sergei Kaplan
  • Patent number: 7602798
    Abstract: Techniques for accelerating network receive side processing of packets. Packets may be associated into flow groupings and stored in flow buffers. Packet headers that are available for TCP/IP processing may be provided for processing. If a payload associated with a header is not available for processing then a descriptor associated with the header is tagged as indicating the payload is not available for processing.
    Type: Grant
    Filed: August 27, 2004
    Date of Patent: October 13, 2009
    Assignee: Intel Corporation
    Inventors: John Ronciak, Christopher Leech, Prafulla Deuskar, Jesse Brandeburg, Patrick Connor
  • Patent number: 7602783
    Abstract: A packet switch for switching variable length packets, where each output port interface includes a buffer memory for storing transmission packets, a transmission priority controller for classifying, based on a predetermined algorithm, transmission packets passed from a packet switching unit into a plurality of queue groups to which individual bandwidths are assigned respectively, and queuing the transmission packets in the buffer memory so as to form a plurality of queues according to transmission priority in each of the queue groups and a packet read-out controller for reading out the transmission packets from each of the queue groups in the buffer memory according to the order of transmission priority of the packets while guaranteeing the bandwidth assigned to the queue group.
    Type: Grant
    Filed: July 18, 2002
    Date of Patent: October 13, 2009
    Assignee: Hitachi, Ltd.
    Inventor: Takeshi Aimoto
  • Patent number: 7602709
    Abstract: A network device implements congestion management of sessions of a network protocol. In one implementation, an incoming request component receives session requests for a negotiation session between the network device and a second network device. A capacity pool stores a value relating to capacity of the network device to continue to efficiently process the session requests. New sessions are initiated when the value stored in the capacity pool is less than an estimate of the capacity of the network device at which the network device maximizes processor usage while minimizing session timeouts.
    Type: Grant
    Filed: November 17, 2004
    Date of Patent: October 13, 2009
    Assignee: Juniper Networks, Inc.
    Inventors: Yonghui Cheng, Choung-Yaw Shieh
  • Patent number: 7603429
    Abstract: A network interface adapter includes a network interface and a client interface, for coupling to a client device so as to receive from the client device work requests to send messages over the network using a plurality of transport service instances. Message processing circuitry, coupled between the network interface and the client interface, includes an execution unit, which generates the messages in response to the work requests and passes the messages to the network interface to be sent over the network. A memory stores records of the messages that have been generated by the execution unit in respective lists according to the transport service instances with which the messages are associated. A completion unit receives the records from the memory and, responsive thereto, reports to the client device upon completion of the messages.
    Type: Grant
    Filed: January 11, 2006
    Date of Patent: October 13, 2009
    Assignee: Mellanox Technologies Ltd.
    Inventors: Michael Kagan, Diego Crupnicoff, Gilad Shainer, Ariel Shahar
  • Patent number: 7602720
    Abstract: Novel methods and devices are provided for AQM of input-buffered network devices. Preferred implementations of the invention control overall buffer occupancy while protecting uncongested individual VOQs. The probability of setting a “global drop flag” (which is not necessarily used to trigger packet drops, but may also be used to trigger other AQM responses) may depend, at least in part, on the lesser of a running average of buffer occupancy and instantaneous buffer occupancy. In some preferred embodiments, this probability also depends on the number of active VOQs. Moreover, a global drop flag is set in conjunction with a drop threshold M associated with the VOQs. Whether an AQM response is made may depend on whether a global drop flag has been set and whether a destination VOQ contains M or more packets. Different M values may be established for different classes of traffic, e.g., with higher M values for higher-priority traffic. AQM responses (e.g.
    Type: Grant
    Filed: June 16, 2005
    Date of Patent: October 13, 2009
    Assignee: Cisco Technology, Inc.
    Inventors: Davide Bergamasco, Flavio Bonomi, Valentina Alaria, Andrea Baldini
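The decision logic in the abstract above (a probabilistic global drop flag gated by a per-VOQ threshold M) can be sketched as follows; the linear drop probability, occupancy figures, and function names are illustrative assumptions rather than the actual AQM formulas.

```python
import random

def global_flag(avg_occupancy, inst_occupancy, capacity):
    """Set the global drop flag with a probability that grows with the *lesser*
    of the running-average and instantaneous buffer occupancy."""
    occupancy = min(avg_occupancy, inst_occupancy)
    probability = max(0.0, min(1.0, occupancy / capacity))
    return random.random() < probability

def aqm_response(flag_set, voq_len, drop_threshold_m):
    """Act (drop/mark) only if the global flag is set *and* the destination
    VOQ already holds at least M packets, protecting uncongested VOQs."""
    return flag_set and voq_len >= drop_threshold_m

flag = global_flag(avg_occupancy=900, inst_occupancy=950, capacity=1000)
print(aqm_response(flag, voq_len=40, drop_threshold_m=8))  # usually True when congested
print(aqm_response(flag, voq_len=2,  drop_threshold_m=8))  # False: short VOQ is spared
```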
  • Patent number: 7599381
    Abstract: Eligible entries are scheduled using an approximated finish delay identified for an entry based on an associated speed group. One implementation maintains schedule entries, each respectively associated with a start time and a speed group. Each speed group is associated with an approximated finish delay. An approximated earliest finishing entry from the eligible schedule entries is determined that has an earliest approximated finish time, with the approximated finish time of an entry being determined based on the entry's start time and the approximated finish delay of the associated speed group. The scheduled action corresponding to the approximated earliest finishing entry is then typically performed. The action performed may, for example, correspond to the forwarding of one or more packets, an amount of processing associated with a process or thread, or any activity associated with an item.
    Type: Grant
    Filed: December 23, 2004
    Date of Patent: October 6, 2009
    Assignee: Cisco Technology, Inc.
    Inventors: Doron Shoham, Christopher J. Kappler, Anna Charny, Earl T. Cohen, Robert Olsen
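A small sketch of scheduling by approximated finish time, as the abstract above describes: each entry's finish time is estimated as its start time plus the finish delay of its speed group, and the earliest approximated finisher goes first. The delay values and entry format are assumptions made for illustration.

```python
import heapq

def schedule(entries, speed_group_delay):
    """Pick eligible entries in order of earliest *approximated* finish time,
    computed as start_time + the finish delay of the entry's speed group."""
    heap = []
    for name, start_time, group in entries:
        finish = start_time + speed_group_delay[group]
        heapq.heappush(heap, (finish, name))
    while heap:
        finish, name = heapq.heappop(heap)
        yield name, finish

# Speed groups stand in for configured rates; the per-group delay is the approximation.
speed_group_delay = {"fast": 1.0, "slow": 8.0}
entries = [("A", 0.0, "slow"), ("B", 0.5, "fast"), ("C", 2.0, "fast")]
print(list(schedule(entries, speed_group_delay)))
# [('B', 1.5), ('C', 3.0), ('A', 8.0)]
```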
  • Patent number: 7593417
    Abstract: An access point is to handle received broadcast or multicast traffic as multiple instances of unicast traffic, where each instance is destined for a corresponding wireless client device associated with the access point. A client device may adjust its listen interval parameter according to predefined considerations, for example a charge level of a battery to power the client device and an expected usage model for the device. A client device may initiate a reassociation request to inform the access point of its adjusted listen interval parameter.
    Type: Grant
    Filed: January 21, 2005
    Date of Patent: September 22, 2009
    Assignee: Research In Motion Limited
    Inventors: James Wang, Tom Nagy
  • Patent number: 7590058
    Abstract: A terminal adapter for guaranteeing the quality of service of both voice and data packets is disclosed. When a data packet is received in a first data input queue of a terminal adapter, a determination is made whether a voice packet is present in a voice input queue. Another determination is made as to whether the sum of the size of the data packet and the size of all packets in a terminal adapter output queue would exceed a first size threshold established for the output queue. If voice packets are present in the voice input queue, or if the aforementioned sum exceeds the size threshold, the data packet is not forwarded to the output queue. If no voice packets are present in the voice input queue and if the aforementioned sum is below the first size threshold, then the data packet is forwarded to the output queue.
    Type: Grant
    Filed: December 1, 2004
    Date of Patent: September 15, 2009
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Ali M. Cherchali, Gagan Lal Choudhury, Marius Jonas Gudelis, Robert J. McLaughlin
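The forwarding rule in the abstract above reduces to two checks on the data path; the sketch below models them directly, with byte counts standing in for packet sizes. The threshold value and function name are illustrative assumptions.

```python
from collections import deque

def try_forward_data(data_pkt, voice_in, data_out, out_size_limit):
    """Forward a data packet to the output queue only if no voice packet is
    waiting and the packet would not push the output queue past its size cap."""
    if voice_in:
        return False                      # voice present: hold the data packet back
    queued_bytes = sum(len(p) for p in data_out)
    if queued_bytes + len(data_pkt) > out_size_limit:
        return False                      # would exceed the output-queue threshold
    data_out.append(data_pkt)
    return True

voice_in, data_out = deque(), deque()
print(try_forward_data(b"x" * 200, voice_in, data_out, out_size_limit=1500))  # True
voice_in.append(b"voice")
print(try_forward_data(b"x" * 200, voice_in, data_out, out_size_limit=1500))  # False
```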
  • Patent number: 7583675
    Abstract: An apparatus and method for multiplexing voice packets to an ATM cell in an ATM network. Voice packets received from voice users to which voice calls have been connected are stored in a queue. An output rate controller determines a maximum number of ATM cells that can be transmitted during a predetermined control period according to information about the voice calls. When a voice packet is received at the queue, or when a predetermined operation period is reached, the output rate controller compares a number of ATM cells transmitted during the control period with the maximum number of ATM cells. If the number of transmitted ATM cells is less than the maximum number of ATM cells, the output rate controller requests a multiplexer to output an ATM cell. Then the multiplexer forms an ATM cell from stored voice packets by multiplexing.
    Type: Grant
    Filed: November 27, 2002
    Date of Patent: September 1, 2009
    Assignee: Samsung Electronics Co., Ltd
    Inventor: Sung-Won Lee
  • Patent number: 7583683
    Abstract: A router that uses distributed processing for FIB look-up and a fair queuing algorithm is disclosed. Real-time traffic, which includes voice and video, should be transmitted within a certain time limit; otherwise the quality of the traffic is affected and the information is no longer useful. The packet scheduler in the router transmits packets within the time limit, but the packet scheduler is not fast enough relative to the link speed and the size of the router. This invention uses a plurality of processors with nearly identical processing time per processor. FIB look-up and switching are performed by different processors to reduce the processing time, and the traffic control algorithm can be performed independently by each processor. Thus, the processing speed of the entire router can be raised.
    Type: Grant
    Filed: May 9, 2005
    Date of Patent: September 1, 2009
    Assignee: Yang Soon Cha
    Inventor: Kicheon Kim
  • Patent number: 7577159
    Abstract: A time-series data management method, a time-series data management device, a time-series data management system, and a time-series data management program capable of eliminating overhead due to exclusive access control and improving the efficient use of a buffer as well as enabling the start position of the oldest or nearly oldest data to be specified easily. A buffer for storing time-series data from a plurality of time-series data generators is divided into a plurality of buffer elements to manage time-series data. The buffer elements are dynamically allocated to the respective time-series data generators.
    Type: Grant
    Filed: September 8, 2005
    Date of Patent: August 18, 2009
    Assignee: NEC Corporation
    Inventor: Takashi Horikawa
  • Patent number: 7567508
    Abstract: A method and system for providing delay bound and prioritized packet dropping are disclosed. The system limits the size of a queue configured to deliver packets in FIFO order by a threshold based on a specified delay bound. Received packets are queued if the threshold is not exceeded. If the threshold is exceeded, a packet having a precedence level less than that of the precedence level of the received packet is dropped. If all packets in the queue have a precedence level greater than that of the packet received, then the received packet is dropped if the threshold is exceeded.
    Type: Grant
    Filed: May 23, 2005
    Date of Patent: July 28, 2009
    Assignee: Cisco Technology, Inc.
    Inventors: Anna Charny, Christopher Kappler, Sandeep Bajaj, Earl T. Cohen
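A minimal sketch of the drop policy described above, expressing the delay bound as a maximum queue length: a new packet is queued if the bound is not exceeded; otherwise a lower-precedence packet is evicted if one exists, else the arrival is dropped. The length-based threshold and names are assumptions for illustration.

```python
def enqueue(queue, pkt, precedence, max_len):
    """Queue the packet if the delay-bound threshold (here a maximum queue
    length) is not exceeded; otherwise evict a lower-precedence packet if one
    exists, else drop the arriving packet."""
    if len(queue) < max_len:
        queue.append((pkt, precedence))
        return True
    lowest = min(range(len(queue)), key=lambda i: queue[i][1])
    if queue[lowest][1] < precedence:
        del queue[lowest]                 # make room by dropping a lesser packet
        queue.append((pkt, precedence))
        return True
    return False                          # nothing queued ranks below the arrival

q = []
for pkt, prec in [("a", 1), ("b", 2), ("c", 1)]:
    enqueue(q, pkt, prec, max_len=3)
print(enqueue(q, "d", 3, max_len=3), q)   # True: 'a' (precedence 1) is evicted
print(enqueue(q, "e", 1, max_len=3), q)   # False: queue is left unchanged
```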
  • Publication number: 20090175287
    Abstract: A virtual output queuing controlling device in an input buffering switch with a virtual output queuing technique includes a specialized class for CBR traffic, and a connection request generation section that makes a connection request for a switch scheduler, which can execute a three-step priority control. The connection request generation section makes the connection request of the specialized class for CBR traffic prior to the connection request of the other classes for the switch scheduler.
    Type: Application
    Filed: March 5, 2009
    Publication date: July 9, 2009
    Applicant: NEC Corporation
    Inventor: Satoshi Kamiya
  • Patent number: 7558279
    Abstract: Disclosed is a method for preventing transmission delay occurring when transmitting packet data to a destination in a transmission module including a first memory having a first memory area for storing packet data to be transmitted and a second memory area for backing up the first memory area, a second memory connected to a transmission link, and a frame processor. The method comprises storing a copy of the packet data stored in the second memory area according to a state of the second memory; deleting the packet data stored in the second memory area if the copy of the packet data stored in the second memory is transmitted; determining whether packet data combining is possible according to a state of the second memory area; and if the packet data combining is possible, combining the packet data stored in the first memory area with the packet data stored in the second memory area and storing the combined packet data in the second memory.
    Type: Grant
    Filed: May 21, 2004
    Date of Patent: July 7, 2009
    Assignees: Samsung Electronics Co., Ltd., Seoul National University Industry Foundation
    Inventors: Hyo-Sun Hwang, Sung-Hyun Choi, Kyung-Hun Jang, Young-Soo Kim
  • Publication number: 20090168794
    Abstract: A method and system for communicating between two independent software components of the SideShow device are disclosed. Specifically, one embodiment of the present invention sets forth a method, which includes the steps of independently queuing an incoming packet from a second software component via an emulated serial transport in a first software component before parsing and responding to the incoming packet and independently queuing an outgoing packet in the first software component before transmitting the outgoing packet to the second software component also via the emulated serial transport.
    Type: Application
    Filed: December 27, 2007
    Publication date: July 2, 2009
    Inventor: Yu-Fong Cho
  • Patent number: 7555002
    Abstract: An aliased queue pair is provided within a logically partitioned data processing system for each logical partition for the single general services management queue pair that exists within a physical host channel adapter. Packets intended for the logical ports are received at the physical port. Multiple partitions exist within the data processing system. When one of these partitions needs to use one of the logical ports, a queue pair is selected. The queue pair is then associated with the logical port. The queue pair is configured as an aliased general services management queue pair and is used by the partition as if the aliased queue pair were the single general services management queue pair provided in the channel adapter.
    Type: Grant
    Filed: November 6, 2003
    Date of Patent: June 30, 2009
    Assignee: International Business Machines Corporation
    Inventors: Richard Louis Arndt, Bruce Leroy Beukema, David F. Craddock, Ronald Edward Fuhs, Thomas Anthony Gregg, Calvin Charles Paynton, Steven L. Rogers, Donald William Schmidt, Bruce Marshall Walk
  • Patent number: 7551636
    Abstract: A method for buffering variable length data at a decoupler includes receiving, at a decoupler, a request to queue variable length data from a producer, with the decoupler comprising a management header and a buffer pool. One of a plurality of fixed-length segments in the buffer pool is dynamically selected based, at least in part, on the management header. Buffer space in the selected segment is automatically allocated based on the request and the allocated space is then populated with the variable length data.
    Type: Grant
    Filed: July 9, 2004
    Date of Patent: June 23, 2009
    Assignee: Computer Associates Think, Inc.
    Inventor: Peter E. Morrison
  • Publication number: 20090147796
    Abstract: To avoid under-run conditions that result in corrupt packets at I/O interfaces, a FIFO buffer controller monitors key aspects of the contents of FIFO buffers of I/O interfaces. The FIFO buffer controller initiates transmission of data from the FIFO buffer when at least one complete packet is stored in the FIFO buffer or when the size of a partial packet stored therein is large enough so that the remainder of the packet would normally be received by the FIFO buffer before the stored part can be transmitted from the FIFO buffer; thereby avoiding an under-run error condition.
    Type: Application
    Filed: December 10, 2007
    Publication date: June 11, 2009
    Applicant: ALCATEL LUCENT
    Inventor: Joey Chow
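The start condition described above compares how long the remainder of a partial packet will take to arrive against how long the already-buffered part will take to transmit; the sketch below states that comparison directly. The rate figures and function signature are illustrative assumptions.

```python
def can_start_transmit(buffered_bytes, complete_packet_in_fifo, remaining_bytes,
                       fill_rate, drain_rate):
    """Start draining the FIFO when a full packet is buffered, or when the
    buffered fragment is large enough that the rest of the packet will arrive
    before the transmitter catches up (avoiding an under-run)."""
    if complete_packet_in_fifo:
        return True
    time_to_receive_rest = remaining_bytes / fill_rate   # time for the tail to arrive
    time_to_drain_buffer = buffered_bytes / drain_rate   # time to send what is stored
    return time_to_receive_rest <= time_to_drain_buffer

# A 1500-byte packet arriving at 1 Gb/s and leaving at 10 Gb/s must be almost
# fully buffered before transmission can safely begin.
print(can_start_transmit(200,  False, 1300, fill_rate=1e9, drain_rate=10e9))  # False
print(can_start_transmit(1400, False, 100,  fill_rate=1e9, drain_rate=10e9))  # True
```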
  • Patent number: 7546367
    Abstract: Methods and systems for managing network traffic by multiple constraints are provided. A received network packet is assigned to multiple queues where each queue is associated with a different constraint. A network packet traverses the queues once it satisfies the constraint associated with that queue. Once the network packet has traversed all its assigned queues the network packet is forwarded to its destination. Also, network activity, associated with higher priority applications which are not managed, is detected. When such activity is detected, additional constraints are added to the network packet as it traverses its assigned queues.
    Type: Grant
    Filed: January 15, 2004
    Date of Patent: June 9, 2009
    Assignee: Novell, Inc.
    Inventor: Jamshid Mahdavi
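A toy rendering of the multi-constraint traversal described above: a packet must clear every constraint queue it was assigned to before it is forwarded, and one of those constraints can model yielding to unmanaged higher-priority traffic. The constraint callables and names are assumptions made for illustration.

```python
def traverse_constraints(packet, constraint_queues):
    """A packet assigned to several queues (one per constraint) is forwarded
    only after it has satisfied, and so left, every one of them."""
    for name, is_satisfied in constraint_queues:
        if not is_satisfied(packet):
            return f"held in '{name}' queue"
    return "forwarded to destination"

packet = {"size": 1500, "higher_priority_active": False}
constraint_queues = [
    ("rate limit", lambda p: p["size"] <= 4000),               # rate budget remains
    ("yield",      lambda p: not p["higher_priority_active"]), # no unmanaged high-priority traffic
]
print(traverse_constraints(packet, constraint_queues))   # 'forwarded to destination'
```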
  • Patent number: 7545808
    Abstract: A network device switches variable length data units from a source to a destination in a network. An input port receives the variable length data unit and a divider divides the variable length data unit into uniform length data units for temporary storage in the network device. A distributed memory includes a plurality of physically separated memory banks addressable using a single virtual address space and an input switch streams the uniform length data units across the memory banks based on the virtual address space. The network device further includes an output switch for extracting the uniform length data units from the distributed memory by using addresses of the uniform length data units within the virtual address space. The output switch reassembles the uniform length data units to reconstruct the variable length data unit. An output port receives the variable length data unit and transfers the variable length data unit to the destination.
    Type: Grant
    Filed: September 15, 2005
    Date of Patent: June 9, 2009
    Assignee: Juniper Networks, Inc.
    Inventors: Pradeep S. Sindhu, Dennis C. Ferguson, Bjorn O. Liencres, Nalini Agarwal, Hann-Hwan Ju, Raymond Marcelino Manese Lim, Rasoul Mirzazadeh Oskouy, Sreeram Veeragandham
  • Patent number: 7542473
    Abstract: A scheduling apparatus for a switch includes multiple schedulers which are assigned in a variety of ways to non-intersecting control domains for establishing connections through the switch. The control domains are defined by spatial and temporal aspects. The control domains may be dynamically selected and assigned to schedulers in a manner that achieves a high throughput gain. Control domains may be considered in a cyclic and/or a pipeline discipline for accommodating connection requests. The invention enables the realization of a highly scalable controller of a switching node of fine granularity that scales to capacities of the order of hundreds of terabits per second.
    Type: Grant
    Filed: December 2, 2004
    Date of Patent: June 2, 2009
    Assignee: Nortel Networks Limited
    Inventor: Maged E. Beshai
  • Patent number: 7525917
    Abstract: A traffic control system and method with flow control aggregation. The system includes a switching fabric and an ingress module. The switching fabric includes read counters that are associated with a plurality of queues. The read counters represent an aggregated number of cells dequeued from respective queues since a previous flow control message (FCM) was sent to the ingress module. The read counters are reset when a FCM is created. The ingress module includes write counters that are associated with the queues. The write counters are incremented each time a cell is sent to the respective queues. The write counters are decremented in accordance with the FCM when the FCM is received. Also, read counters for one or more queues are aggregated into a single FCM.
    Type: Grant
    Filed: June 4, 2003
    Date of Patent: April 28, 2009
    Assignee: Alcatel-Lucent USA Inc.
    Inventors: Philip Ferolito, Eric Anderson, Gregory S. Mathews
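The counter aggregation in the abstract above can be modelled with two small classes, one per side of the fabric; the class names and the dictionary-based FCM format are illustrative assumptions, not the actual message layout.

```python
class FabricReadCounters:
    """Switch-fabric side: count cells dequeued per queue since the last
    flow-control message (FCM); counters reset when an FCM is created."""
    def __init__(self, queues):
        self.read = {q: 0 for q in queues}

    def dequeue(self, q):
        self.read[q] += 1

    def make_fcm(self):
        fcm = dict(self.read)                  # aggregate counts for all queues
        self.read = {q: 0 for q in self.read}  # reset once the FCM is generated
        return fcm

class IngressWriteCounters:
    """Ingress side: count cells sent per queue and decrement by the amounts
    reported in each received FCM."""
    def __init__(self, queues):
        self.write = {q: 0 for q in queues}

    def enqueue(self, q):
        self.write[q] += 1

    def on_fcm(self, fcm):
        for q, dequeued in fcm.items():
            self.write[q] -= dequeued

fabric = FabricReadCounters(["q0", "q1"])
ingress = IngressWriteCounters(["q0", "q1"])
for q in ["q0", "q0", "q1"]:
    ingress.enqueue(q)
    fabric.dequeue(q)
ingress.on_fcm(fabric.make_fcm())
print(ingress.write)   # {'q0': 0, 'q1': 0} -- nothing left in flight
```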
  • Patent number: 7525958
    Abstract: A method and apparatus for two-stage packet classification. In the first stage, which may be implemented in software, a packet is classified on the basis of the packet's network path and, perhaps, its protocol. In the second stage, which may be implemented in hardware, the packet is classified on the basis of one or more transport level fields of the packet. An apparatus of two-stage packet classification may include a processing system for first stage code execution, a classification circuit for performing the second stage of classification, and a memory to store a number of bins, each bin including one or more rules.
    Type: Grant
    Filed: April 8, 2004
    Date of Patent: April 28, 2009
    Assignee: Intel Corporation
    Inventors: Alok Kumar, Michael E. Kounavis, Raj Yavatkar, Prashant R Chandra, Sridhar Lakshmanamurthy, Chen-Chi Kuo, Harrick M. Vin
  • Publication number: 20090103555
    Abstract: A network device receives a label-switched-path (LSP) labeled data packet, maps the LSP labeled data packet to an input queue, maps a data packet in the input queue to an output queue based on a received LSP label value and a received exp label value, and transmits the LSP labeled data packet from the output queue.
    Type: Application
    Filed: October 22, 2007
    Publication date: April 23, 2009
    Applicant: Verizon Services Organization Inc.
    Inventor: Nabil N. Bitar
  • Publication number: 20090103556
    Abstract: A data switch for an integrated circuit comprising at least one link for receiving input data packets from an independently modulated spread spectrum clock (SSC) enabled source having predetermined spread spectrum link clock frequency characteristics, and at least one output for transmitting the data packets after passage through the switch, the switch further comprising at least one receive buffer having a link side and a core side for receiving the SSC modulated input data packets from the link, at least one transmit buffer and a core clock, wherein the core clock operates at a given frequency between predetermined error limits determined by oscillation accuracy alone and is not SSC-enabled, the core clock frequency being set at a level at least as high as the highest link clock frequency such that the receive buffer cannot be filled faster from its link side than it can be emptied from its core side.
    Type: Application
    Filed: October 14, 2008
    Publication date: April 23, 2009
    Applicant: VirtenSys Limited
    Inventors: Finbar Naven, John Roger Drewry
  • Patent number: 7522623
    Abstract: A method and system for transferring iSCSI protocol data units (“PDUs”) to a host system is provided. The HBA includes a direct memory access engine operationally coupled to a pool of small buffers and a pool of large buffers, wherein an incoming PDU size is compared to the size of a small buffer and, if the PDU fits in the small buffer, the PDU is placed in the small buffer. The incoming PDU size is compared to a large buffer size and if the incoming PDU size is less than the large buffer size then the incoming PDU is placed in the large buffer. If the incoming PDU size is greater than a large buffer, then the incoming PDU is placed in more than one large buffer and a pointer to a list of large buffers storing the incoming PDU is placed in a small buffer.
    Type: Grant
    Filed: September 1, 2004
    Date of Patent: April 21, 2009
    Assignee: QLOGIC, Corporation
    Inventors: Derek Rohde, Michael I. Thompson
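The buffer-selection rule in the abstract above is a simple size comparison; the sketch below follows that reading, with the buffer sizes and return format chosen purely for illustration.

```python
def place_pdu(pdu_len, small_buf_size, large_buf_size):
    """Decide where an incoming PDU goes: a small buffer if it fits, one large
    buffer if it fits there, or several large buffers with a pointer list to
    those buffers kept in a small buffer."""
    if pdu_len <= small_buf_size:
        return {"placement": "small buffer"}
    if pdu_len <= large_buf_size:
        return {"placement": "large buffer"}
    count = -(-pdu_len // large_buf_size)        # ceiling division: buffers needed
    return {"placement": f"{count} large buffers",
            "pointer_list_in": "small buffer"}

print(place_pdu(96,    small_buf_size=128, large_buf_size=4096))
print(place_pdu(2048,  small_buf_size=128, large_buf_size=4096))
print(place_pdu(10000, small_buf_size=128, large_buf_size=4096))
```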
  • Patent number: 7522624
    Abstract: The present invention relates to a switching unit with a scalable and QoS aware flow control. The actual schedule rate of an egress queue, wherein the outgoing traffic belonging to a particular class of service is backlogged, is measured and compared to its expected schedule rate. If the egress queue is scheduled below expectation, then the bandwidth of every virtual ingress-to-egress pipe connecting an ingress queue, wherein the incoming traffic belonging to the same class of service is backlogged before transmission through the switch core fabric, to that egress queue is increased, thereby feeding that egress queue with more data units.
    Type: Grant
    Filed: October 18, 2004
    Date of Patent: April 21, 2009
    Assignee: Alcatel
    Inventors: Peter Irma August Barri, Bart Joseph Gerard Pauwels, Geert René Taildemand
  • Patent number: 7522625
    Abstract: The present invention provides a high-speed reassembly process capable of dealing with long packets while making efficient use of buffer memory. A method for processing fragmented packets in packet transfer equipment that transmits and receives packet data between terminals through a network includes: receiving fragmented packets; identifying whether a received packet was fragmented into two pieces from the original, or into three or more; for a packet fragmented into two, storing the two fragments in an assembly buffer in fragmentation order, on the basis of the respective offset values in the packets, and reading them out from the top; and for a packet fragmented into three or more, chain-connecting the assembly buffers, storing the fragments in reception order, deciding the read-out order by comparing the chain information and the offset values of the fragments within the chain, and then reassembling the packets.
    Type: Grant
    Filed: August 25, 2005
    Date of Patent: April 21, 2009
    Assignee: Fujitsu Limited
    Inventors: Hideo Abe, Kenji Fukuda, Susumu Kojima
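A simplified rendering of the two cases in the abstract above: two-fragment packets are ordered directly by offset, while packets split into three or more fragments are collected in arrival order and then reordered by offset before read-out. The fragment dictionary format is an assumption for illustration.

```python
def reassemble(fragments):
    """Reassemble a fragmented packet.  A two-fragment packet is ordered
    directly by offset; three or more fragments are kept in arrival order
    (modelling chained buffers) and sorted by offset before read-out."""
    if len(fragments) == 2:
        first, second = sorted(fragments, key=lambda f: f["offset"])
        return first["data"] + second["data"]
    chained = list(fragments)                  # arrival order, "chain-connected"
    chained.sort(key=lambda f: f["offset"])    # decide read-out order by offset
    return b"".join(f["data"] for f in chained)

fragments = [
    {"offset": 7,  "data": b"world"},
    {"offset": 0,  "data": b"hello, "},
    {"offset": 12, "data": b"!"},
]
print(reassemble(fragments))   # b'hello, world!'
```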
  • Publication number: 20090097495
    Abstract: Flexible virtual queues of a switch are allocated to provide non-blocking virtual output queue (VOQ) support. A port ASIC has a set of VOQs, one VOQ per supported port of the switch. For each VOQ, a set of virtual input queues (VIQs) includes a VIQ for each input port of the port ASIC that forms a non-blocking flow with the corresponding output port (and potentially, with the specified level of service) in the switch. The port ASIC selects a VOQ for transmission and then arbitrates among the VIQs of the selected VOQ to select a VIQ from which to transmit the packet. Having identified an appropriate VIQ, the port ASIC transmits cells of the packet at the head of the VIQ to a port ASIC that includes the corresponding output port for reassembly and eventual transmission through the output port.
    Type: Application
    Filed: October 11, 2007
    Publication date: April 16, 2009
    Applicant: BROCADE COMMUNICATIONS SYSTEMS, INC.
    Inventors: Subbarao Palacharla, Michael Corwin
  • Patent number: 7519728
    Abstract: A system improves bandwidth used by a data stream. The system receives data from the data stream and partitions the data into bursts. At least one of the bursts includes one or more idles. The system selectively removes the idles from the at least one burst and transmits the bursts, including the at least one burst.
    Type: Grant
    Filed: July 18, 2002
    Date of Patent: April 14, 2009
    Assignee: Juniper Networks, Inc.
    Inventors: Sharada Yeluri, Kevin Clark, Shahriar Ilislamloo, Chung Lau
  • Publication number: 20090086737
    Abstract: A Queue Manager (QM) system and method are provided for communicating control messages between processors. The method accepts control messages from a source processor addressed to a destination processor. The control messages are loaded in a first-in first-out (FIFO) queue associated with the destination processor. Then, the method serially supplies loaded control messages to the destination processor from the queue. The messages may be accepted from a plurality of source processors addressed to the same destination processor. The control messages are added to the queue in the order in which they are received. In one aspect, a plurality of parallel FIFO queues may be established that are associated with the same destination processor. Then, the method differentiates the control messages into the parallel FIFO queues and supplies control messages from the parallel FIFO queues in an order responsive to criteria such as queue ranking, weighting, or shaping.
    Type: Application
    Filed: September 29, 2007
    Publication date: April 2, 2009
    Inventors: Mark Fairhurst, John Dickey
  • Publication number: 20090086748
    Abstract: A multi-port serial buffer having a plurality of queues is configured to include a first set of queues assigned to store write data associated with a first port, and a second set of queues assigned to store write data associated with a second port. The available queues are user-assignable to either the first set or the second set. Write operations to the first set of queues can be performed in parallel with write operations to the second programmable set of queues. In addition, a first predetermined set of queues is assigned to the first port for read operations, and a second predetermined set of queues is assigned to the second port for read operations. Data can be read from the first predetermined set of queues to the first port at the same time that data is read from the second predetermined set of queues to the second port.
    Type: Application
    Filed: September 27, 2007
    Publication date: April 2, 2009
    Applicant: Integrated Device Technology, Inc.
    Inventors: Chi-Lie Wang, Jason Z. Mo, Mario Au
  • Patent number: 7508786
    Abstract: A mobile communication system and method for exchanging packet data between a host mobile communication terminal and at least one client mobile communication terminal. The host mobile communication terminal sends an SMS message including a list of data to be provided and an IP address for packet data communication. At least one client mobile communication terminal sends a list of data requested to be sent for downloading and authentication information to the host mobile communication terminal over PPP channels set up for the packet data communication. The host mobile communication terminal then sends the data requested by the client mobile communication terminal thereto over the PPP channels. Therefore, data can be exchanged between the host mobile communication terminal and the client mobile communication terminal, without using a separate physical device or means for data exchange.
    Type: Grant
    Filed: August 25, 2004
    Date of Patent: March 24, 2009
    Assignee: Samsung Electronics Co., Ltd
    Inventor: Shin-Hee Do
  • Patent number: 7508837
    Abstract: Systems and methods that provide receive queue provisioning are provided. In one embodiment, a communications system may include, for example, a first queue pair (QP), a second QP and a general pool. The first QP may be associated with a first connection and may include, for example, a first send queue (SQ). The second QP may be associated with a second connection and may include, for example, a second SQ. The general pool may include, for example, a shared receive queue (SRQ) that may be shared, for example, by the first QP and the second QP.
    Type: Grant
    Filed: October 17, 2003
    Date of Patent: March 24, 2009
    Assignee: Broadcom Corporation
    Inventor: Uri Elzur
  • Patent number: 7505410
    Abstract: Method and apparatus to support efficient check-point and roll-back operations for flow-controlled queues in network devices. The method and apparatus employ queue descriptors to manage transfer of data from corresponding queues in memory into a switch fabric. In one embodiment, each queue descriptor includes an enqueue pointer identifying a tail cell of a segment of data scheduled to be transferred from the queue, a schedule pointer identifying a head cell of the segment of data, and a commit pointer identifying a most recent cell in the segment of data to be successfully transmitted into the switch fabric. In another embodiment, the queue descriptor further includes a scheduler sequence number and a committed sequence number that are employed in connection with transfers of data from queues containing multiple segments. The various pointers and sequence numbers are employed to facilitate efficient check-point and roll-back operations relating to unsuccessful transmissions into the switch fabric.
    Type: Grant
    Filed: June 30, 2005
    Date of Patent: March 17, 2009
    Assignee: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Sanjeev Jain, Gilbert Wolrich, Hugh Wilkinson
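A toy model of the three pointers named in the abstract above; the success flag, index-based pointers, and method name are illustrative assumptions, not the actual descriptor format.

```python
class QueueDescriptor:
    """Toy queue descriptor with the three pointers named in the abstract:
    schedule (head of the in-flight segment), enqueue (tail of the segment),
    and commit (last cell known to have reached the switch fabric)."""

    def __init__(self, cells):
        self.cells = list(cells)
        self.schedule_ptr = 0               # next cell to be sent into the fabric
        self.commit_ptr = -1                # last successfully transmitted cell
        self.enqueue_ptr = len(cells) - 1   # tail of the scheduled segment

    def transmit_next(self, success):
        cell = self.cells[self.schedule_ptr]
        self.schedule_ptr += 1
        if success:
            self.commit_ptr = self.schedule_ptr - 1   # check-point the progress
        else:
            self.schedule_ptr = self.commit_ptr + 1   # roll back to the last commit
        return cell

qd = QueueDescriptor(["c0", "c1", "c2", "c3"])
qd.transmit_next(success=True)    # c0 reaches the fabric and is committed
qd.transmit_next(success=False)   # c1 is lost: the schedule pointer rolls back
print(qd.schedule_ptr, qd.commit_ptr)   # 1 0 -- c1 will be retried next
```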