Having Both Input And Output Queuing Patents (Class 370/413)
  • Patent number: 9996468
    Abstract: In a method for managing memory space in a network device, two or more respective allocation requests from two or more processing cores among a plurality of processing cores sharing a memory space are received at a memory management device during a first single clock cycle of the memory management device, the two or more allocation requests requesting to allocate, to the two or more processing cores, respective buffers in the shared memory space. In response to receiving the two or more allocation requests, the memory management device allocates, to the two or more processing cores, respective two or more buffers in the shared memory space. Additionally, the memory management device, during a second single clock cycle of the memory management device, transmits respective allocation responses to each of the two or more processing cores, wherein each allocation response includes an indication of a respective allocated buffer.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: June 12, 2018
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Evgeny Shumsky, Shira Turgeman, Gil Levy
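    The abstract above describes a memory manager that accepts several buffer-allocation requests from different cores in one clock cycle and answers them all in a later cycle. Below is a minimal, hypothetical Python sketch of that batched request/response pattern; the class and method names (`MemoryManager`, `allocate_cycle`, `free_buffers`) are illustrative assumptions, not taken from the patent.
```python
from collections import deque

class MemoryManager:
    """Toy model of a memory management device sharing buffers among cores."""

    def __init__(self, num_buffers):
        # Free list of buffer identifiers in the shared memory space.
        self.free_buffers = deque(range(num_buffers))

    def allocate_cycle(self, requesting_cores):
        """Handle all allocation requests that arrived in one clock cycle.

        Returns the (core_id, buffer_id) allocation responses that would be
        transmitted back to the cores in a later cycle.
        """
        responses = []
        for core_id in requesting_cores:
            if not self.free_buffers:
                raise MemoryError("shared memory space exhausted")
            responses.append((core_id, self.free_buffers.popleft()))
        return responses

    def free(self, buffer_id):
        """Return a buffer to the shared pool once a core releases it."""
        self.free_buffers.append(buffer_id)


if __name__ == "__main__":
    mmu = MemoryManager(num_buffers=8)
    # Two cores request buffers during the same cycle; each gets its own buffer.
    print(mmu.allocate_cycle(requesting_cores=[0, 3]))  # [(0, 0), (3, 1)]
```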
  • Patent number: 9882676
    Abstract: A method and system provides for adapting a computational load in a partly centralized radio access network. A computational load of computational elements of a central processor is measured or estimated, which central processor communicates with a plurality of radio access points and user equipment in the partly centralized radio access network. It is determined, by the central processor, whether the computational load should be reduced or increased. The computational load is adjusted by changing at least one of a modulation and coding scheme of the user equipment or an uplink power.
    Type: Grant
    Filed: March 7, 2014
    Date of Patent: January 30, 2018
    Assignee: NEC Corporation
    Inventor: Peter Rost
  • Patent number: 9846658
    Abstract: In one embodiment, packet memory and resource memory of a memory are independently managed, with regions of packet memory being freed of packets and temporarily made available to resource memory. In one embodiment, packet memory regions are dynamically made available to resource memory so that in-service system upgrade (ISSU) of a packet switching device can be performed without having to statically allocate (as per prior systems) twice the memory space required by resource memory during normal packet processing operations. One embodiment dynamically collects fragments of packet memory stored in packet memory to form a contiguous region of memory that can be used by resource memory in a memory system that is shared between many clients in a routing complex. One embodiment assigns a contiguous region no longer used by packet memory to resource memory, and from resource memory to packet memory, dynamically without packet loss or pause.
    Type: Grant
    Filed: April 21, 2014
    Date of Patent: December 19, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Mohammed Ismael Tatar, Promode Nedungadi, Naader Hasani, John C. Carney
  • Patent number: 9807027
    Abstract: A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: October 31, 2017
    Assignee: Marvell Israel (M.I.S.L.) Ltd.
    Inventors: Evgeny Shumsky, Gil Levy, Adar Peery, Amir Roitshtein, Aron Wohlgemuth
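    The queuing discipline in the abstract above (per-class queues that preserve arrival order, with processing actions deferred until a packet reaches the head of its class queue) can be modelled compactly. This is a hedged sketch only; the names `OrderPreservingDispatcher` and `action_ready`, and the string packet identifiers, are invented for illustration.
```python
from collections import deque

class OrderPreservingDispatcher:
    """Toy model: actions execute only when a packet reaches its class-queue head."""

    def __init__(self):
        self.queues = {}           # packet class -> deque of packet ids (arrival order)
        self.pending_actions = {}  # packet id -> processing action issued by a node

    def enqueue(self, packet_id, packet_class):
        self.queues.setdefault(packet_class, deque()).append(packet_id)

    def action_ready(self, packet_id, packet_class, action):
        """A processing node finished; store the action, then drain what we can."""
        self.pending_actions[packet_id] = action
        return self._drain(packet_class)

    def _drain(self, packet_class):
        """Perform stored actions for packets at the head of the class queue."""
        performed = []
        q = self.queues.get(packet_class, deque())
        while q and q[0] in self.pending_actions:
            pid = q.popleft()
            performed.append((pid, self.pending_actions.pop(pid)))
        return performed


if __name__ == "__main__":
    d = OrderPreservingDispatcher()
    d.enqueue("p1", "flow-A")
    d.enqueue("p2", "flow-A")
    # p2 finishes first but must wait behind p1 to preserve arrival order.
    print(d.action_ready("p2", "flow-A", "forward"))  # []
    print(d.action_ready("p1", "flow-A", "drop"))     # [('p1', 'drop'), ('p2', 'forward')]
```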
  • Patent number: 9742548
    Abstract: A method and an apparatus for using a serial port device in a time division multiplexing manner are provided. The apparatus includes a first serial port, a second serial port, a switching circuit, and a signal interface, where the switching circuit selects to receive data sent; the first serial port sends first data to a first serial port device; the second serial port receives second data sent by a second serial port device, and when it is determined that the second data indicates that the second serial port device needs to receive third data sent by the second serial port, instructs the switching circuit to select to receive the third data sent; and the second serial port sends the third data to the second serial port device. Therefore, the first serial port and the second serial port can use corresponding serial port devices in a time division multiplexing manner.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: August 22, 2017
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Lijuan Tan
  • Patent number: 9608926
    Abstract: A method for managing recirculation path traffic in a network node comprises monitoring an input packet stream received at an input port of the network node and monitoring a recirculation packet stream at a recirculation path of the network node. A priority level associated with individual packets of the monitored input packet stream is detected and low priority packets are stored in a virtual queue. The method also includes determining an average packet length associated with packets of the monitored recirculation packet stream. The method further comprises queuing one or more of the low priority packets or the recirculation packets for transmission based on the average packet length and a weighted share schedule.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: March 28, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Hiroshi Suzuki, Surendra Anubolu, Andrew Michael Robbins, Stephen Francis Scheid
  • Patent number: 9596192
    Abstract: Embodiments relate to transmission of control data between a network switch and a switch controller. Aspects of the embodiments include: configuring a plurality of control data packets by the switch controller, wherein configuring includes disposing a sequence number in each of the plurality of control data packets indicating an order of data packet transmission; storing the plurality of control data packets in a replay buffer in communication with the switch controller; transmitting the plurality of control data packets to the network switch over a secure link between the switch controller and the network switch; and responsive to determining that one or more control data packets were not received by the network switch, retrieving the one or more control data packets from the replay buffer and re-transmitting the one or more control data packets to the network switch.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 14, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Casimer M. DeCusatis, Rajaram B. Krishnamurthy
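    The replay-buffer scheme above (sequence-numbered control packets kept by the controller until acknowledged, then re-sent on demand) is sketched below in Python. All identifiers (`SwitchControllerLink`, `retransmit_missing`, the `transmit` callback) are hypothetical and only show the general idea.
```python
class SwitchControllerLink:
    """Toy replay-buffer model for control traffic from a controller to a switch."""

    def __init__(self):
        self.next_seq = 0
        self.replay_buffer = {}   # sequence number -> control packet payload

    def send(self, payload, transmit):
        """Stamp a sequence number, keep a copy, and transmit over the link."""
        seq = self.next_seq
        self.next_seq += 1
        self.replay_buffer[seq] = payload
        transmit(seq, payload)
        return seq

    def acknowledge(self, seq):
        """Switch confirmed receipt; the stored copy is no longer needed."""
        self.replay_buffer.pop(seq, None)

    def retransmit_missing(self, missing_seqs, transmit):
        """Re-send any control packets the switch reports as not received."""
        for seq in missing_seqs:
            if seq in self.replay_buffer:
                transmit(seq, self.replay_buffer[seq])


if __name__ == "__main__":
    link = SwitchControllerLink()
    sent = []
    link.send({"rule": "drop tcp/23"}, transmit=lambda s, p: sent.append(s))
    link.send({"rule": "allow tcp/443"}, transmit=lambda s, p: sent.append(s))
    link.acknowledge(1)                                          # packet 1 arrived
    link.retransmit_missing([0], transmit=lambda s, p: sent.append(s))
    print(sent)                                                  # [0, 1, 0] -- packet 0 was replayed
```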
  • Patent number: 9590923
    Abstract: A method for transmission of control data between a network switch and a switch controller is provided. The method includes: configuring a plurality of control data packets by the switch controller, wherein configuring includes disposing a sequence number in each of the plurality of control data packets indicating an order of data packet transmission; storing the plurality of control data packets in a replay buffer in communication with the switch controller; transmitting the plurality of control data packets to the network switch over a secure link between the switch controller and the network switch; and responsive to determining that one or more control data packets were not received by the network switch, retrieving the one or more control data packets from the replay buffer and re-transmitting the one or more control data packets to the network switch.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: March 7, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Casimer M. DeCusatis, Rajaram B. Krishnamurthy
  • Patent number: 9535861
    Abstract: A device includes a routing buffer. The routing buffer includes a first port configured to receive a signal relating to an analysis of at least a portion of a data stream. The routing buffer also includes a second port configured to selectively provide the signal to a first routing line of a block of a state machine at a first time. The routing buffer further includes a third port configured to selectively provide the signal to a second routing line of the block of the state machine at the first time.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: January 3, 2017
    Assignee: Micron Technology, Inc.
    Inventors: David R. Brown, Harold B. Noyes, Irene Junjuan Xu, Paul Glendenning
  • Patent number: 9460007
    Abstract: An apparatus relates generally to time sharing of an arithmetic unit. In such an apparatus, a controller is coupled to provide read pointers and write pointers. A memory block is coupled to receive the read pointers and the write pointers. A selection network is coupled to the memory block and the arithmetic unit. The memory block includes a write-data network, a read-data network, and memory banks.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: October 4, 2016
    Assignee: XILINX, INC.
    Inventors: Ephrem C. Wu, Xiaoqian Zhang
  • Patent number: 9374387
    Abstract: Aspects of the present invention provide a device, method and system which utilize hardware-based granular evaluation of industrial control protocol packets to withstand traffic storms. In an embodiment, packet evaluation circuitry coupled to a port may be adapted to evaluate one or more protocol fields contained in each inbound packet before switching circuitry can send the inbound packet to the proper destination. The inbound packet may be sent by the switching circuitry if it is a particular message, or may be selectively inhibited from being sent by the switching circuitry if the inbound packet does not contain the particular message for being sent and if the total number of bytes of the inbound packet type exceeds a threshold for the outbound port during a given period of time. As such, critical industrial applications may continue to operate in the presence of a traffic storm.
    Type: Grant
    Filed: October 12, 2012
    Date of Patent: June 21, 2016
    Assignee: Rockwell Automation Technologies, Inc.
    Inventors: Brian A. Batke, Sivaram Balasubramanian
  • Patent number: 9369397
    Abstract: A device may store a credit value for each of multiple output components. The device may receive packets from a network device via an input component. The device may cause the input component to queue the packets. The device may selectively dequeue a packet from the input component, to be sent to an output component, based on whether the credit value for the output component satisfies a credit threshold. The device may send the packet to the output component based on a destination of the packet when the packet is dequeued from the input component. The device may determine a size of the packet after the packet is dequeued. The device may update the credit value for the output component based on the size of the packet. The device may output the packet to another network device via the output component.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: June 14, 2016
    Assignee: Juniper Networks, Inc.
    Inventors: Ravi Pathakota, Sarin Thomas, Sudipta Kundu, Srihari R. Vegesna, Firdaus Mahiar Irani, Kalpataru Maji, Naveen K. Jain
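    A toy version of the credit-based dequeue logic described above: a packet is released from an input queue only while the target output's credit satisfies a threshold, and the credit is charged by the packet's size after dequeue. Names such as `CreditScheduler` and the byte-string packets are illustrative assumptions.
```python
class CreditScheduler:
    """Toy credit-based dequeue: a packet leaves an input queue only when the
    destination output component still has credit above a threshold."""

    def __init__(self, outputs, initial_credit, credit_threshold=0):
        self.credits = {out: initial_credit for out in outputs}
        self.threshold = credit_threshold

    def try_dequeue(self, input_queue, output):
        """Dequeue the head packet destined for `output` if its credit allows it."""
        if not input_queue or self.credits[output] < self.threshold:
            return None
        packet = input_queue.pop(0)
        # The size is only known after dequeue; charge it against the credit then.
        self.credits[output] -= len(packet)
        return packet

    def replenish(self, output, amount):
        """The output component returns credit as it drains packets to the wire."""
        self.credits[output] += amount


if __name__ == "__main__":
    sched = CreditScheduler(outputs=["port0"], initial_credit=1500)
    in_q = [b"x" * 1000, b"y" * 1000]
    print(sched.try_dequeue(in_q, "port0") is not None)  # True, credit drops to 500
    print(sched.try_dequeue(in_q, "port0") is not None)  # True, credit drops to -500
    print(sched.try_dequeue(in_q, "port0"))              # None, below threshold
    sched.replenish("port0", 2000)                       # credit restored by the output
```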
  • Patent number: 9338100
    Abstract: A method and apparatus aggregate a plurality of input data streams from first processors into one data stream for a second processor, the circuit and the first and second processors being provided on an electronic circuit substrate. The aggregation circuit includes (a) a plurality of ingress data ports, each ingress data port adapted to receive an input data stream from a corresponding first processor, each input data stream formed of ingress data packets, each ingress data packet including priority factors coded therein, (b) an aggregation module coupled to the ingress data ports, adapted to analyze and combine the plurality of input data streams into one aggregated data stream in response to the priority factors, (c) a memory coupled to the aggregation module, adapted to store analyzed data packets, and (d) an output data port coupled to the aggregation module, adapted to output the aggregated data stream to the second processor.
    Type: Grant
    Filed: June 24, 2013
    Date of Patent: May 10, 2016
    Assignee: Foundry Networks, LLC
    Inventors: Yuen Fai Wong, Yu-Mei Lin, Richard A. Grenier
  • Patent number: 9275290
    Abstract: A device includes a routing buffer. The routing buffer includes a first port configured to receive a signal relating to an analysis of at least a portion of a data stream. The routing buffer also includes a second port configured to selectively provide the signal to a first routing line of a block of a state machine at a first time. The routing buffer further includes a third port configured to selectively provide the signal to a second routing line of the block of the state machine at the first time.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: March 1, 2016
    Assignee: Micron Technology, Inc.
    Inventors: David R. Brown, Harold B. Noyes, Irene Junjuan Xu, Paul Glendenning
  • Patent number: 9166914
    Abstract: A mechanism is provided in a data processing system for shared buffer affinity for multiple ports. The mechanism configures a physical first-in-first-out (FIFO) buffer with a plurality of FIFO segments associated with a plurality of network ports. The plurality of network ports share the physical FIFO buffer. The mechanism identifies a FIFO segment under stress within the plurality of FIFO segments. The mechanism reconfigures the physical FIFO buffer to assign a portion of buffer space from a FIFO segment not under stress within the plurality of FIFO segments to the FIFO segment under stress.
    Type: Grant
    Filed: December 9, 2013
    Date of Patent: October 20, 2015
    Assignee: International Business Machines Corporation
    Inventors: Omar Cardona, Andres Herrera, Pedro V. Torres, Rafael Velez
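    The reconfiguration idea above (shift buffer space from an idle FIFO segment to a stressed one while all ports share a single physical buffer) might look roughly like the following sketch. The thresholds, chunk size, and the name `SharedFifoBuffer` are assumptions for illustration only.
```python
class SharedFifoBuffer:
    """Toy model of one physical buffer carved into per-port FIFO segments,
    where slots can be shifted from an idle segment to a stressed one."""

    def __init__(self, segment_sizes):
        # port name -> number of buffer slots currently assigned to its segment
        self.segment_sizes = dict(segment_sizes)
        self.occupancy = {port: 0 for port in segment_sizes}

    def utilization(self, port):
        return self.occupancy[port] / self.segment_sizes[port]

    def rebalance(self, stress_level=0.9, idle_level=0.25, chunk=64):
        """Move `chunk` slots from the least-utilized segment to any segment
        whose utilization exceeds `stress_level`."""
        for port in list(self.segment_sizes):
            if self.utilization(port) < stress_level:
                continue
            donor = min(self.segment_sizes, key=self.utilization)
            if donor != port and self.utilization(donor) < idle_level \
                    and self.segment_sizes[donor] > chunk:
                self.segment_sizes[donor] -= chunk
                self.segment_sizes[port] += chunk


if __name__ == "__main__":
    fifo = SharedFifoBuffer({"eth0": 256, "eth1": 256})
    fifo.occupancy.update({"eth0": 250, "eth1": 10})   # eth0's segment is under stress
    fifo.rebalance()
    print(fifo.segment_sizes)                          # {'eth0': 320, 'eth1': 192}
```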
  • Patent number: 9166878
    Abstract: In one embodiment, an apparatus includes a network management module configured to execute at a network device operatively coupled to a switch fabric. The switch fabric may have a distributed control plane. The network management module is configured to receive a request regarding status information for a certain set of network resources identified with a virtual or logical identifier. The network management module is configured to generate and send a corresponding query for status information to a set of physical elements that encompass and may be larger than the certain set of network resources and collect responses to that query. The network management module is configured to construct a response to the request from the status information in the collected responses to the query. The constructed response includes only information related to the original request.
    Type: Grant
    Filed: December 23, 2010
    Date of Patent: October 20, 2015
    Assignee: Juniper Networks, Inc.
    Inventors: Dana Cook, Chris Cole, David Nedde
  • Patent number: 9160699
    Abstract: A method of distributing messages from a server system to a plurality of client systems comprises defining a quality of service (QoS) level for messages provided by the messaging system to the client system, defining a message processing capacity provided by a client to the messaging system, and degrading the QoS level of messages in the event that the client system does not provide the defined message processing capacity to the messaging system.
    Type: Grant
    Filed: June 2, 2006
    Date of Patent: October 13, 2015
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Ian Gerald Craggs
  • Patent number: 9117037
    Abstract: An interface apparatus, a cascading system thereof, and a cascading method thereof are provided. The cascading system includes a host, a first-type interface apparatus, and a second-type interface apparatus which are serially connected. The host provides data transmission of a first and a second channel by a first controller through a first interface port. In the first-type interface apparatus, data of the first channel is transmitted to a second controller through a second interface port and then to a third interface port, and data of the second channel is directly transmitted to the third interface port through the second interface port. In the second-type interface apparatus, the data of the second channel is transmitted to a third controller through a fourth interface port and then to the fifth interface port, and the data of the first channel is directly transmitted to the fifth interface port through the fourth interface port.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: August 25, 2015
    Assignee: Acer Incorporated
    Inventor: Sip Kim Yeung
  • Patent number: 9083655
    Abstract: Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include internal cut-through. The internal cut-through may bypass input port buffers by directly forwarding packet data that has been received to an output port. At the output port, the packet data is buffered for processing and communication out of the switch.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: July 14, 2015
    Assignee: Broadcom Corporation
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
  • Patent number: 9042397
    Abstract: An apparatus comprising a chip comprising a plurality of nodes, wherein a first node from among the plurality of nodes is configured to receive a first flit comprising a first timestamp, receive a second flit comprising a second timestamp, determine whether the first flit is older than the second flit based on the first timestamp and the second timestamp, transmit the first flit before the second flit if the first flit is older than the second flit, and transmit the second flit before the first flit if the first flit is not older than the second flit.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: May 26, 2015
    Assignee: Futurewei Technologies, Inc.
    Inventors: Rohit Sunkam Ramanujam, Sailesh Kumar, William Lynch
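    The age-based arbitration above reduces to a timestamp comparison per flit pair; a minimal sketch, assuming flits are simple `(timestamp, payload)` tuples (these names are illustrative, not from the patent):
```python
def transmit_order(flit_a, flit_b):
    """Decide which of two flits a node should transmit first.

    Each flit is a (timestamp, payload) pair; the older flit (smaller
    timestamp) goes first. If the first flit is not older, the second
    flit is transmitted before the first, mirroring the abstract above.
    """
    ts_a, _ = flit_a
    ts_b, _ = flit_b
    return (flit_a, flit_b) if ts_a < ts_b else (flit_b, flit_a)


if __name__ == "__main__":
    first = (17, "flit from core 3")
    second = (9, "flit from core 1")
    print(transmit_order(first, second))  # the flit stamped 9 is transmitted first
```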
  • Patent number: 9030936
    Abstract: Methods and apparatus for implementing flow control with reduced buffer usage for network devices. In response to detection of flow control events, transmission of a data unit or segment such as an Ethernet frame is preempted in favor of a flow control message, resulting in aborting transmission of the frame. Data corresponding to the entirety of the frame is buffered at the transmitting station until the frame has been transmitted (or after a delay), enabling retransmission of the aborted frame. Preemption of frames in favor of flow control messages results in earlier responses to flow control events, enabling the size of buffers to be reduced.
    Type: Grant
    Filed: June 12, 2013
    Date of Patent: May 12, 2015
    Assignee: Intel Corporation
    Inventors: Ben-zion Friedman, Eliezer Tamir
  • Patent number: 9031087
    Abstract: A system for optimizing response time to events or representations thereof waiting in a queue has a first server having access to the queue; a software application running on the first server; and a second server accessible from the first server, the second server containing rules governing the optimization. In a preferred embodiment, the software application at least periodically accesses the queue and parses certain ones of events or tokens in the queue and compares the parsed results against rules accessed from the second server in order to determine a measure of disposal time for each parsed event, wherein if the determined measure is sufficiently low for one or more of the parsed events, those one or more events are modified to reflect a higher priority state than originally assigned, enabling faster treatment of those events and resulting in relief from those events to the queue system load.
    Type: Grant
    Filed: April 19, 2011
    Date of Patent: May 12, 2015
    Assignee: Genesys Telecommunications Laboratories, Inc.
    Inventor: Yevgeniy Petrovykh
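    The abstract above describes parsing queued events, estimating their disposal time against rules fetched from a second server, and promoting events that can be disposed of quickly. A rough Python sketch follows; the rule keys and event fields are invented for illustration.
```python
def escalate_priorities(queue, rules):
    """Toy pass over a queue of waiting events: any event whose estimated
    disposal (handling) time is short enough, per rules fetched from a rules
    server, is promoted to a higher priority so it clears the queue sooner."""
    for event in queue:
        estimated_disposal = rules["handling_seconds"].get(
            event["type"], rules["default_handling_seconds"])
        if estimated_disposal <= rules["fast_disposal_seconds"]:
            # Lower number means higher priority; quick wins are promoted.
            event["priority"] = max(event["priority"] - 1, 0)
    # Serve by priority, breaking ties by arrival time.
    queue.sort(key=lambda e: (e["priority"], e["arrived_at"]))
    return queue


if __name__ == "__main__":
    rules = {"fast_disposal_seconds": 30,
             "default_handling_seconds": 120,
             "handling_seconds": {"password-reset": 20, "billing-dispute": 300}}
    queue = [
        {"type": "billing-dispute", "priority": 2, "arrived_at": 100},
        {"type": "password-reset",  "priority": 2, "arrived_at": 140},
    ]
    print([e["type"] for e in escalate_priorities(queue, rules)])
    # ['password-reset', 'billing-dispute'] -- the quickly disposable event jumped ahead
```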
  • Patent number: 9025615
    Abstract: Service providers can reduce multiple overlay networks by creating multiple logical service networks (LSNs) on the same physical or optical fiber network through use of an embodiment of the invention. The LSNs are established by the service provider and can be characterized by traffic type, bandwidth, delay, hop count, guaranteed information rates, and/or restoration priorities. Once established, the LSNs allow the service provider to deliver a variety of services to customers depending on a variety of factors, for example, a customer's traffic specifications. Different traffic specifications are serviced on different LSNs depending on each LSN's characteristics. Such LSNs, once built within a broadband network, can be customized and have their services sold to multiple customers.
    Type: Grant
    Filed: September 10, 2013
    Date of Patent: May 5, 2015
    Assignee: Tellabs Operations, Inc.
    Inventors: Michael Kazban, Mitri Halabi, Ken Koenig, Vinai Sirkay
  • Publication number: 20150110126
    Abstract: An electronic device communicates according to a network protocol that defines data packets, for example EtherCAT. The device has a processor for performing input control on incoming data packets and performing output control on outgoing data packets, and a shared FIFO buffer comprising a multiuser memory. An input unit receives input data, detects the start of a respective data packet, subdivides the data packet into consecutive segments, one segment having a predetermined number of data bytes, and transfers the segment to the FIFO buffer before the next segment has been completely received. The processor accesses, in the input control, the multiuser memory for processing the segment, and, in the output control, initiates outputting the output packet before the corresponding input data packet has been completely received. An output unit transfers the segment from the FIFO buffer, and transmits the segment to the communication medium.
    Type: Application
    Filed: July 3, 2012
    Publication date: April 23, 2015
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Graham Edmiston, Hezi Rahamim, Amir Yosha
  • Patent number: 8995455
    Abstract: One method includes: (a) providing a memory storage device having a plurality of storage locations for storing information received by a plurality of sub-ports of a base port of the network device, where the memory storage device is shared among the plurality of sub-ports such that each sub-port is given access to the memory storage device at a certain phase of a system clock cycle; (b) storing a packet or a portion thereof at one of the storage locations when a sub-port that receives the packet has access to one or more of the storage locations; and (c) scrambling addresses for the memory storage locations such that a different one of the storage locations is available to the sub-port of step (b) for a next write operation in a next phase when the sub-port of step (b) is given access to the memory storage device.
    Type: Grant
    Filed: November 15, 2012
    Date of Patent: March 31, 2015
    Assignee: QLOGIC, Corporation
    Inventors: Frank R. Dropps, Craig M. Verba
  • Patent number: 8997185
    Abstract: An encryption sentinel system and method protects sensitive data stored on a storage device and includes sentinel software that runs on a client machine, sentinel software that runs on a server machine, and a data storage device. When a client machine requests sensitive data from the data storage device, the data storage device interrogates the sentinel software on the server machine to determine if this client machine has previously been deemed to have proper encryption procedures and authentication. If the sentinel server software has this information stored, it provides an approval or denial to the storage device that releases the data if appropriate. If the sentinel server software does not have this information at hand or the previous information is too old, the sentinel server interrogates the sentinel software that resides on the client machine which scans the client machine and provides an encryption update to the sentinel server software, following which data will be released if appropriate.
    Type: Grant
    Filed: November 27, 2012
    Date of Patent: March 31, 2015
    Inventor: Bruce R. Backa
  • Patent number: 8995459
    Abstract: A communication system detects particular application protocols in response to their message traffic patterns, which might be responsive to packet size, average packet rate, burstiness of packet transmissions, or other message pattern features. Selected message pattern features include average packet rate, maximum packet burst, maximum future accumulation, minimum packet size, and maximum packet size. The system maintains a counter of packet tokens, each arriving at a constant rate, and maintains a queue of real packets. Each real packet is released from the queue when there is a corresponding packet token also available for release. Packet tokens overfilling the counter, and real packets overfilling the queue, are discarded. Users might add or alter application protocol descriptions to account for profiles thereof.
    Type: Grant
    Filed: June 30, 2010
    Date of Patent: March 31, 2015
    Assignee: Meru Networks
    Inventors: Vaduvur Bharghavan, Shishir Varma, Sung-Wook Han
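    The pattern described above is essentially a token-paced queue: packet tokens accrue at a constant rate into a bounded counter, queued packets are released one per available token, and overflow in either the counter or the queue is discarded. A minimal sketch, with all parameter names assumed rather than taken from the patent:
```python
from collections import deque

class TokenPacedQueue:
    """Toy pacing model: packet tokens accrue at a constant rate into a bounded
    counter, and a queued real packet is released whenever a token is available;
    overflow tokens and overflow packets are discarded."""

    def __init__(self, token_rate, max_tokens, max_queue):
        self.token_rate = token_rate      # tokens added per second
        self.max_tokens = max_tokens
        self.max_queue = max_queue
        self.tokens = 0.0
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.max_queue:
            self.dropped += 1             # queue overfilled: discard the packet
        else:
            self.queue.append(packet)

    def tick(self, elapsed_seconds):
        """Advance time, accrue tokens, and release as many packets as tokens allow."""
        self.tokens = min(self.max_tokens,
                          self.tokens + self.token_rate * elapsed_seconds)
        released = []
        while self.queue and self.tokens >= 1.0:
            released.append(self.queue.popleft())
            self.tokens -= 1.0
        return released


if __name__ == "__main__":
    q = TokenPacedQueue(token_rate=10, max_tokens=5, max_queue=3)
    for i in range(5):
        q.enqueue(f"pkt{i}")              # two packets are dropped (queue holds 3)
    print(q.tick(elapsed_seconds=0.2))    # 2 tokens accrued -> ['pkt0', 'pkt1']
    print(q.dropped)                      # 2
```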
  • Patent number: 8976803
    Abstract: Embodiments of the invention are directed to monitoring resources of a network processor to detect a condition of exhaustion in one or more of the resources over a predetermined time interval and to provide an indication of the condition. Some embodiments periodically sample various resources of a network processor and from the samples calculate utilization of the network processor's memory bus and core processor, and determine if an interworking FIFO packet queue error has occurred. Such information may help network operators and/or support engineers to quickly zero in on the root cause and take corrective actions for network failures which previously could have been attributed to many different causes and that would have required significant time and effort to troubleshoot.
    Type: Grant
    Filed: April 11, 2013
    Date of Patent: March 10, 2015
    Assignee: Alcatel Lucent
    Inventors: Toby J. Koktan, William R. McEachern
  • Patent number: 8976802
    Abstract: An arbitration technique for determining mappings for a switch is described. During a given arbitration decision cycle, an arbitration mechanism maintains, until expiration, a set of mappings from a subset of the input ports to a subset of the output ports of the switch. This set of mappings was determined during an arbitration decision cycle up to K cycles preceding the given arbitration decision cycle. Because the set of mappings are maintained, it is easier for the arbitration mechanism to determine mappings from a remainder of the input ports to the remainder of the output ports without collisions.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: March 10, 2015
    Assignee: Oracle International Corporation
    Inventors: Pranay Koka, Herbert D. Schwetman, Jr., Syed Ali Raza Jafri
  • Patent number: 8971329
    Abstract: A multiple channel data transfer system (10) includes a source (12) that generates data packets with sequence numbers for transfer over multiple request channels (14). Data packets are transferred over the multiple request channels (14) through a network (16) to a destination (18). The destination (18) re-orders the data packets received over the multiple request channels (14) into a proper sequence in response to the sequence numbers to facilitate data processing. The destination (18) provides appropriate reply packets to the source (12) over multiple response channels (20) to control the flow of data packets from the source (12).
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: March 3, 2015
    Assignee: Silicon Graphics International Corp.
    Inventors: Randal G. Martin, Steven C. Miller, Mark D. Stadler, David A. Kruckemyer
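    Destination-side re-ordering by sequence number, as described above, can be sketched with a small reassembly buffer; `Reorderer` and its method names are illustrative assumptions, and the reply-packet flow control is omitted.
```python
class Reorderer:
    """Toy destination-side reassembly: packets may arrive out of order over
    multiple channels and are delivered upward strictly in sequence order."""

    def __init__(self):
        self.expected = 0
        self.out_of_order = {}   # sequence number -> packet held until its turn

    def receive(self, seq, packet):
        """Accept a packet from any channel; return packets now deliverable in order."""
        self.out_of_order[seq] = packet
        delivered = []
        while self.expected in self.out_of_order:
            delivered.append(self.out_of_order.pop(self.expected))
            self.expected += 1
        return delivered


if __name__ == "__main__":
    r = Reorderer()
    print(r.receive(1, "B"))   # []          -- waits for sequence 0
    print(r.receive(0, "A"))   # ['A', 'B']  -- both released in proper sequence
    print(r.receive(2, "C"))   # ['C']
```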
  • Patent number: 8971346
    Abstract: A data collection system for, and methods of, providing reliable store-and-forward data handling by encoded information reading terminals can utilize ad-hoc peer-to-peer (i.e., terminal-to-terminal) connections in order to store data that is normally stored on a single terminal only, in a redundant manner on two or more terminals. Each portable encoded information reading terminal can be configured so that when it captures data, a software application causes the terminal to search out nearby peer terminals that can store and/or forward the data to other peer terminals or to a data collection server, resulting in the data having been stored by one or more peer terminals that are immediately or not immediately accessible by the data-originating terminal.
    Type: Grant
    Filed: April 30, 2007
    Date of Patent: March 3, 2015
    Assignee: Hand Held Products, Inc.
    Inventor: Mitchel P. Sevier
  • Patent number: 8964760
    Abstract: A network switch transfers data, which are to be transferred between nodes, in a time-division multiplex manner after allocating the data to slots, which are created by dividing a unit of time into a plurality of sections. An input unit includes a selection unit that selects a buffer unit according to an input slot in order to transfer the data input from the input port to the buffer unit, an input slot correspondence management table that stores a correspondence relationship between the input slots and the buffer units, and input port management information used to control a communication bandwidth of the input port.
    Type: Grant
    Filed: March 8, 2010
    Date of Patent: February 24, 2015
    Assignee: NEC Corporation
    Inventor: Nobuki Kajihara
  • Patent number: 8953631
    Abstract: An embodiment may include circuitry to permit interruption, at least in part, of a first frame from a sender to an intended recipient in favor of transmitting, at least in part, a payload of a second frame from the sender to the intended recipient, and/or processing, at least in part, one or more incoming flow control notifications. The payload may be transmitted, at least in part, to the intended recipient in one or more frame fragments. Many modifications, variations, and alternatives are possible without departing from this embodiment.
    Type: Grant
    Filed: June 30, 2010
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Ygdal Naouri, Eliel Louzoun
  • Patent number: 8937964
    Abstract: Packets having at least one cell are switched using input queues, output queues, a switch fabric, and a controller. Each input queue stores cells to be switched, and each output queue stores switched cells. The switch fabric couples the input queues to the output queues and has memory. The switch fabric stores cells moved from the input queues to the switch fabric and stores cells based on the output queues. The controller couples to the input queues and the switch fabric and determines input priorities for cells moving from the input queues to the switch fabric and output priorities for cells moving from the switch fabric to the output queues.
    Type: Grant
    Filed: June 27, 2003
    Date of Patent: January 20, 2015
    Assignee: Tellabs Operations, Inc.
    Inventors: Robert B. Magill, Kenneth P. Laberteaux
  • Patent number: 8914560
    Abstract: An IOP 14 includes a path-state determining unit 54 and a path selecting unit 55. The path-state determining unit 54 determines whether there is any path which is neither in process of data transmission nor in a prohibition period in which data transmission is prohibited for a predetermined time since the last data transmission has been completed out of multiple paths connecting a device to a communication partner device. When the path-state determining unit 54 determines that there is no path which is neither in process of data transmission nor in the prohibition period, the path selecting unit 55 selects a path which completes data transmission but does not pass through the prohibition period as a path for data transmission.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: December 16, 2014
    Assignee: Fujitsu Limited
    Inventor: Tadasuke Katoh
  • Patent number: 8908709
    Abstract: In one embodiment, a method includes receiving a request to transmit data from a first queue to a second queue via a switch fabric. In response to the receiving, a wake-up signal configured to trigger a stage of a processing pipeline in communication with the second queue to change from a standby state to an active state is sent.
    Type: Grant
    Filed: January 8, 2009
    Date of Patent: December 9, 2014
    Assignee: Juniper Networks, Inc.
    Inventor: Gunes Aybay
  • Patent number: 8908711
    Abstract: Techniques for using target issue intervals are provided. Request messages may identify the size of a data packet. A target issue interval may be determined based on the request messages. The target issue interval may be used to insert a delay between sending subsequent request messages.
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: December 9, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Michael L Ziegler
  • Patent number: 8885472
    Abstract: The systems and methods described herein allow for the scaling of output-buffered switches by decoupling the data path from the control path. Some embodiments of the invention include a switch with a memory management unit (MMU), in which the MMU enqueues data packets to an egress queue at a rate that is less than the maximum ingress rate of the switch. Other embodiments include switches that employ pre-enqueue work queues, with an arbiter that selects a data packet for forwarding from one of the pre-enqueue work queues to an egress queue.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: November 11, 2014
    Assignee: Broadcom Corporation
    Inventors: Bruce Kwan, Brad Matthews, Puneet Agarwal
  • Patent number: 8885657
    Abstract: Back pressure is mapped within a network, and primary bottlenecks are distinguished from dependent bottlenecks. Further, the presently disclosed technology is capable of performing network healing operations designed to reduce the data load on primary bottlenecks while ignoring dependent bottlenecks. Still further, the presently disclosed technology teaches identifying and/or suggesting a switch port for adding a node to the network. More specifically, various implementations analyze traffic load and back pressure in a network, identify primary and dependent bottlenecks, resolve the primary bottlenecks, collect new node parameters, and/or select a switch port for the new node. Further, a command can be sent to a selected switch to activate an indicator on the selected port. New node parameters may include new node type, maximum load, minimum load, time of maximum load, time of minimum load and type of data associated with the new node.
    Type: Grant
    Filed: November 6, 2009
    Date of Patent: November 11, 2014
    Assignee: Brocade Communications Systems, Inc.
    Inventors: Michael Atkinson, Vineet Abraham, Sathish Gnanasekaran, Rishi Sinha
  • Patent number: 8879578
    Abstract: Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include speculative flow status messaging, for example. The speculative flow status messaging may alert an egress tile or output port of an incoming packet before the incoming packet is fully received. The processing techniques may also include implementing a separate accelerated credit pool which provides controlled push capability for the ingress tile or input port to send packets to the egress tile or output port without waiting for a bandwidth credit from the egress tile or output port.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: November 4, 2014
    Assignee: Broadcom Corporation
    Inventors: Brad Matthews, Puneet Agarwal, Bruce Kwan
  • Patent number: 8879571
    Abstract: Techniques for delays based on packet sizes are provided. Request messages may identify the size of a data packet. Delays may be initiated based in part on a portion of the size of the data packet. The delays may also be based in part on target issue intervals. Request messages may be sent after the delays.
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: November 4, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Michael L. Ziegler
  • Publication number: 20140307746
    Abstract: A network device such as a router or switch, in one embodiment, includes a timing analyzer which is capable of providing timing analysis over one or more network circuits. The timing analyzer, in one aspect, receives a data packet traveling across a circuit emulation service (“CES”) circuit such as T1 or E1 circuit. Upon obtaining an arrival timestamp associated with the data packet, the arrival timestamp is stored in a timestamp buffer in accordance with a first-in first-out (“FIFO”) storage sequence. After identifying the oldest arrival timestamp in the timestamp buffer, an offset is generated based on the result of comparison between the arrival timestamp and the oldest timestamp. The timing analyzer can also be configured to generate timing reports on-demand based on generated offset(s).
    Type: Application
    Filed: April 11, 2013
    Publication date: October 16, 2014
    Applicant: Tellabs Operations, Inc.
    Inventors: Anthony Leonard Sasak, Christopher V. O'Brien
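    A toy model of the timing analyzer described above: arrival timestamps enter a FIFO buffer, each new arrival is compared against the oldest stored timestamp to form an offset, and a report can be produced on demand. Class and field names are assumptions, not from the application.
```python
from collections import deque

class CesTimingAnalyzer:
    """Toy timing analyzer for a CES circuit: arrival timestamps go into a FIFO
    timestamp buffer, and each new arrival is compared against the oldest
    stored timestamp to produce an offset."""

    def __init__(self, depth):
        self.timestamps = deque(maxlen=depth)   # FIFO: oldest entries age out

    def packet_arrived(self, arrival_timestamp):
        offset = None
        if self.timestamps:
            oldest = self.timestamps[0]
            offset = arrival_timestamp - oldest
        self.timestamps.append(arrival_timestamp)
        return offset

    def report(self):
        """On-demand report of the span covered by the buffered timestamps."""
        if len(self.timestamps) < 2:
            return {"samples": len(self.timestamps)}
        return {"samples": len(self.timestamps),
                "span": self.timestamps[-1] - self.timestamps[0]}


if __name__ == "__main__":
    analyzer = CesTimingAnalyzer(depth=4)
    for ts in (1000, 1125, 1251, 1374):      # arrival times in microseconds
        analyzer.packet_arrived(ts)
    print(analyzer.report())                 # {'samples': 4, 'span': 374}
```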
  • Patent number: 8861539
    Abstract: Multicast traffic is expected to increase in packet networks, and therefore in switches and routers, by including broadcast and multimedia-on-demand services. Combined input-crosspoint buffered (CICB) switches can provide high performance under uniform multicast traffic. However, this is often at the expense of N² crosspoint buffers. An output-based shared-memory crosspoint-buffered (O-SMCB) packet switch is used where the crosspoint buffers are shared by two outputs and use no speedup. An embodiment of the proposed switch provides high performance under admissible uniform and non-uniform multicast traffic models while using 50% of the memory used in CICB switches that have dedicated buffers. Furthermore, the O-SMCB switch provides higher throughput than an existing SMCB switch where the buffers are shared by inputs.
    Type: Grant
    Filed: August 29, 2008
    Date of Patent: October 14, 2014
    Assignee: New Jersey Institute of Technology
    Inventors: Ziqian Dong, Roberto Rojas-Cessa
  • Patent number: 8861515
    Abstract: Generally, a method and apparatus are disclosed that store sequential data units of a data packet received at an input port in contiguous banks of a buffer in a shared memory, thereby obviating any need for storing linkage information between data units. Data packets can extend through multiple buffers (next-buffer linkage information is much more efficient than next-data-unit linkage information). According to another aspect of the invention, buffer memory utilization can be further enhanced by storing multiple packets in a single buffer. For each buffer, a buffer usage count is stored that indicates the sum (over all packets represented in the buffer) of the number of output ports toward which each of the packets is destined.
    Type: Grant
    Filed: April 21, 2004
    Date of Patent: October 14, 2014
    Assignee: Agere Systems LLC
    Inventors: Chung Kuang Chin, Yaw Fann, Roy T. Myers, Jr.
  • Patent number: 8855129
    Abstract: A method for transmitting packets, the method includes receiving multiple packets at multiple queues. The method is characterized by dynamically defining fixed priority queues and weighted fair queuing queues, and scheduling a transmission of packets in response to a status of the multiple queues and in response to the definition. A device for transmitting packets, the device includes multiple queues adapted to receive multiple packets. The device includes a circuit that is adapted to dynamically define fixed priority queues and weighted fair queuing queues out of the multiple queues and to schedule a transmission of packets in response to a status of the multiple queues and in response to the definition.
    Type: Grant
    Filed: June 7, 2005
    Date of Patent: October 7, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Boaz Shahar, Freddy Gabbay, Eyal Soha
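    One way to picture the hybrid scheduling above is a dispatcher in which fixed-priority queues always win, and the remaining queues share the residual bandwidth by weight (here approximated with weighted round robin rather than true weighted fair queuing). The sketch below is illustrative only; queue names and weights are invented.
```python
from collections import deque

class HybridScheduler:
    """Toy scheduler: queues named in `fixed_priority` are always served first,
    in order; the remaining queues are served weighted round robin as a simple
    stand-in for weighted fair queuing."""

    def __init__(self, fixed_priority, weighted):
        # weighted: queue name -> relative share, e.g. {"video": 2, "bulk": 1}
        self.queues = {n: deque() for n in list(fixed_priority) + list(weighted)}
        self.fixed_priority = list(fixed_priority)
        # Expand the weights into a repeating service pattern: video, video, bulk, ...
        self.pattern = [n for n, w in weighted.items() for _ in range(w)]
        self.position = 0

    def enqueue(self, name, packet):
        self.queues[name].append(packet)

    def dequeue(self):
        # Fixed-priority queues preempt the weighted ones.
        for name in self.fixed_priority:
            if self.queues[name]:
                return self.queues[name].popleft()
        # Walk the weighted pattern once, starting where we last stopped.
        for step in range(len(self.pattern)):
            name = self.pattern[(self.position + step) % len(self.pattern)]
            if self.queues[name]:
                self.position = (self.position + step + 1) % len(self.pattern)
                return self.queues[name].popleft()
        return None


if __name__ == "__main__":
    sched = HybridScheduler(fixed_priority=["control"], weighted={"video": 2, "bulk": 1})
    for name, pkt in [("bulk", "b1"), ("video", "v1"), ("video", "v2"),
                      ("video", "v3"), ("control", "c1")]:
        sched.enqueue(name, pkt)
    print([sched.dequeue() for _ in range(5)])  # ['c1', 'v1', 'v2', 'b1', 'v3']
```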
  • Patent number: 8848587
    Abstract: Multicasting network packets is disclosed. A total number of copies of a frame, t, to be sent is determined. A number of copies of the frame, m, which is less than a total number of copies of the frame, t, to be made during a current iteration is determined. M copies of the frame are made. The m copies of the frame are then sent to their destinations. The original input frame is provided as output with an indication that the frame should be returned for further processing. Processing of the frame is discontinued during an interval in which other frames are processed. The process is repeated until t copies have been sent.
    Type: Grant
    Filed: April 23, 2004
    Date of Patent: September 30, 2014
    Assignee: Alcatel Lucent
    Inventors: Mark A. L. Smallwood, Michael J. Clarke, Mark A. French, Martin R. Lea
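    The iterative replication described above (make at most m copies per pass, send them, and requeue the original frame until all t copies are out) is easy to model; `multicast`, `copies_per_pass`, and the port names are hypothetical.
```python
from collections import deque

def multicast(frame, destinations, copies_per_pass):
    """Toy model of iterative multicast replication: make at most
    `copies_per_pass` copies of the frame per pass, send them, and requeue the
    original frame for further processing until every destination is served."""
    pending = deque([(frame, list(destinations))])
    sent = []
    passes = 0
    while pending:
        original, remaining = pending.popleft()
        passes += 1
        batch, rest = remaining[:copies_per_pass], remaining[copies_per_pass:]
        for dest in batch:
            sent.append((dest, original))        # a copy of the frame goes out
        if rest:
            # Not all copies made yet: return the frame to the back of the
            # queue so other frames can be processed in between.
            pending.append((original, rest))
    return sent, passes


if __name__ == "__main__":
    sent, passes = multicast("frame-A", ["p1", "p2", "p3", "p4", "p5"],
                             copies_per_pass=2)
    print(len(sent), passes)   # 5 copies sent over 3 passes
```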
  • Publication number: 20140269752
    Abstract: A method for performing aggregation at one or more layers starts with an AP placing at a first layer one or more received frames in a queue at the AP. When a transmit scheduler is ready to transmit an aggregated frame corresponding to the queue, the AP may iteratively select a plurality of frames selected from the one or more received frames, and aggregate at the first layer the plurality of frames into the aggregated frame. The number of frames included in an aggregated frame may be based on at least one of: a dynamically updated rate of transmission associated with a size of the frames, a class of the frames, a transmission opportunity value associated with the class of the frames and a total projected airtime for transmitting the aggregated frame. Other embodiments are also described.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Aruba Networks, Inc.
    Inventors: Gautam Bhanage, Sathish Damodaran
  • Publication number: 20140269750
    Abstract: A system and method are disclosed for assigning incoming packets to receive queues of a virtual machine. In accordance with one embodiment, a hypervisor that is executed by a computer system receives a request from a virtual machine to transmit an outgoing packet to a destination, and an identification of a receive queue of a plurality of receive queues of the virtual machine, where the identification of the receive queue is provided to the hypervisor by the virtual machine along with the request. The hypervisor obtains a flow identifier from a header of the outgoing packet that identifies a flow associated with the outgoing packet, and the outgoing packet is transmitted to the destination. The computer system then receives an incoming packet whose header specifies the flow identifier, and the hypervisor inserts the incoming packet into the receive queue using the identification of the receive queue.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: RED HAT ISRAEL, LTD.
    Inventor: Michael Tsirkin
  • Publication number: 20140269751
    Abstract: An arbitration technique for determining mappings for a switch is described. During a given arbitration decision cycle, an arbitration mechanism maintains, until expiration, a set of mappings from a subset of the input ports to a subset of the output ports of the switch. This set of mappings was determined during an arbitration decision cycle up to K cycles preceding the given arbitration decision cycle. Because the set of mappings are maintained, it is easier for the arbitration mechanism to determine mappings from a remainder of the input ports to the remainder of the output ports without collisions.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Pranay Koka, Herbert D. Schwetman, JR., Syed Ali Raza Jafri
  • Patent number: 8837503
    Abstract: Disclosed are methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor. The network processor includes a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically, and in parallel. An ingress module of the network processor stores the incoming data packets in the memory and adds them to an input queue. The network processor processes a data packet by performing a set of network operations on the data packet in a single thread of a core processor. The special purpose processors perform a subset of the set of network operations on the data packet atomically. An egress module retrieves the processed data packets from a plurality of output queues based on a quality of service (QoS) associated with the output queues, and forwards the data packets towards their destination addresses.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: September 16, 2014
    Assignee: Unbound Networks, Inc.
    Inventors: Damon Finney, Ashok Mathur