Having Output Queuing Only Patents (Class 370/417)
  • Patent number: 11722418
    Abstract: A network configuration method includes determining an end-to-end latency upper bound of data traffic between two end nodes, determining an end-to-end latency constraint of the data traffic between the two end nodes, determining, based on the end-to-end latency upper bound and the end-to-end latency constraint, for a first network shaper, at least one configuration parameter that satisfies the end-to-end latency constraint, and configuring the first network shaper for the data traffic based on the at least one configuration parameter, such that the traffic, after being shaped, satisfies the end-to-end latency constraint.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: August 8, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jiayi Zhang, Tongtong Wang, Xinyuan Wang
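    A minimal sketch of how a shaper parameter could be derived from the two latency figures described in the entry above. The abstract does not name a shaper model; the token-bucket shaper and the additive delay budget used here (fixed end-to-end upper bound plus burst/rate must stay within the constraint) are illustrative assumptions, not the patented method.
    ```python
    # Hedged sketch: pick token-bucket shaper parameters so shaped traffic meets
    # an end-to-end latency constraint. The shaper model and the delay model
    # (fixed_delay + burst/rate <= constraint) are assumptions for illustration.

    def configure_shaper(fixed_delay_s: float, latency_constraint_s: float,
                         peak_rate_bps: float) -> dict:
        """Return a (rate, burst) pair whose shaping delay fits the remaining budget."""
        budget = latency_constraint_s - fixed_delay_s
        if budget <= 0:
            raise ValueError("constraint cannot be met by shaping alone")
        rate = peak_rate_bps              # serve at least the traffic's peak rate
        burst_bits = budget * rate        # largest burst that still fits the budget
        return {"rate_bps": rate, "burst_bits": burst_bits}

    print(configure_shaper(fixed_delay_s=0.002, latency_constraint_s=0.005,
                           peak_rate_bps=1e9))
    ```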
  • Patent number: 11637786
    Abstract: When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease the enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused, or is likely to exacerbate, garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to handle multi-destination traffic more optimally, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, the high-priority multi-destination data unit can no longer be dropped, and is linked to a replication queue for replication.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: April 25, 2023
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan, Ajit Kumar Jain
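    A hedged sketch of the watermark behaviour summarized above: when the buffer space queued for garbage collection exceeds a high threshold, the enqueue rate of a traffic class such as multicast is reduced, and the action is reversed once the backlog drops below a low threshold. The threshold values, the 0.5x throttle factor, and the "multicast" class name are assumptions for illustration.
    ```python
    class GcBackpressure:
        def __init__(self, high_wm: int, low_wm: int):
            self.high_wm = high_wm        # GC backlog (bytes) that triggers throttling
            self.low_wm = low_wm          # GC backlog (bytes) that ends throttling
            self.throttled = False

        def update(self, gc_queued_bytes: int, enqueue_rate: dict) -> dict:
            if not self.throttled and gc_queued_bytes > self.high_wm:
                self.throttled = True
                enqueue_rate["multicast"] *= 0.5     # action: slow multicast enqueue
            elif self.throttled and gc_queued_bytes < self.low_wm:
                self.throttled = False
                enqueue_rate["multicast"] *= 2.0     # reverse the action
            return enqueue_rate

    rates = {"unicast": 1.0, "multicast": 1.0}
    bp = GcBackpressure(high_wm=1_000_000, low_wm=250_000)
    rates = bp.update(gc_queued_bytes=1_200_000, enqueue_rate=rates)   # throttles
    rates = bp.update(gc_queued_bytes=100_000, enqueue_rate=rates)     # restores
    print(rates)
    ```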
  • Patent number: 11502941
    Abstract: A router in a switching network has input interfaces communicatively coupled to other routers and first and second output interfaces communicatively coupled to other routers in the switching network. First and second output interface queues, respectively associated with the first and second output interfaces, store data packets awaiting transmission respectively on the first and second output interfaces. A routing table maps first and second destination addresses to the first output interface as a primary interface and maps the first destination address to the second output interface as an alternate interface. A route controller assigns, using the routing table, data packets having one of the first or second destination address to the primary output interface for transmission, the primary output interface being the first output interface.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: November 15, 2022
    Assignee: T-Mobile USA, Inc.
    Inventor: Cameron Byrne
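    A hedged sketch of the primary/alternate mapping described above: the routing table maps a destination to a primary output interface and optionally an alternate, and the route controller uses the primary interface while it is usable. The liveness check and the exact-match table (no longest-prefix matching) are simplifying assumptions.
    ```python
    # Hedged sketch of primary/alternate output-interface selection. Exact-match
    # lookup and the interface_up liveness dict are illustrative assumptions.

    routing_table = {
        # destination: (primary interface, alternate interface or None)
        "10.0.1.0/24": ("if1", "if2"),
        "10.0.2.0/24": ("if1", None),
    }

    def select_interface(destination: str, interface_up: dict) -> str:
        primary, alternate = routing_table[destination]
        if interface_up.get(primary, False):
            return primary                      # normal case: primary interface
        if alternate and interface_up.get(alternate, False):
            return alternate                    # fall back to the alternate interface
        raise RuntimeError("no usable output interface for " + destination)

    print(select_interface("10.0.1.0/24", {"if1": True, "if2": True}))    # if1
    print(select_interface("10.0.1.0/24", {"if1": False, "if2": True}))   # if2
    ```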
  • Patent number: 11115233
    Abstract: A vehicle includes an image generation controller configured to generate a plurality of image frames, to assign a first MAC address to a first image frame of the plurality of image frames, and to assign a second MAC address to a second image frame of the plurality of image frames; an Ethernet switch including a plurality of Ethernet ports, configured to transmit the first image frame to a first image receiving controller and a second image receiving controller based on the first MAC address, and to transmit the second image frame to the second image receiving controller based on the second MAC address; the first image receiving controller configured to receive and image-process the first image frame; and the second image receiving controller configured to receive and process the first and second image frames.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: September 7, 2021
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Taehwan Park, ChoongSeob Park
  • Patent number: 10523576
    Abstract: Efficient garbage collection techniques for network packets and other units of data are described. Constituent portions of a data unit are stored in buffer entries spread out across multiple distinct banks. Linking data is generated and stored on a per-bank basis. The linking data defines, for each bank in which data for the data unit is stored, a chain of all entries in that bank that store data for the data unit. When the data unit is dropped or otherwise disposed of, a chain's head entry address may be placed in a garbage collection list for the corresponding bank. A garbage collector uses the linking data to gradually follow the chain of entries for the given bank, and frees each entry in the chain along the way. Optionally, certain addresses in the chain, including each chain's tail address, are immediately freed for the corresponding bank, without waiting to follow the chain.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: December 31, 2019
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Ajit Kumar Jain
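    A hedged sketch of the per-bank chain walking summarized above: linking data maps each buffer entry to the next entry of the same data unit within a bank; when the unit is dropped, the chain head goes onto that bank's garbage-collection list, the tail is freed immediately, and a collector later walks the chain one entry at a time. Data layout and method names are assumptions.
    ```python
    from collections import deque

    class BankGc:
        """Garbage collection state for one buffer bank (illustrative sketch)."""

        def __init__(self):
            self.next_entry = {}       # per-bank linking data: entry addr -> next addr
            self.free_list = deque()   # entries available for reuse
            self.gc_list = deque()     # (chain head, chain tail) pairs awaiting collection

        def drop_chain(self, head: int, tail: int):
            self.free_list.append(tail)            # the tail can be freed right away
            if head != tail:
                self.gc_list.append((head, tail))  # the rest is collected gradually

        def collect_one_step(self):
            """Free one entry of the oldest pending chain per call."""
            if not self.gc_list:
                return
            addr, tail = self.gc_list.popleft()
            self.free_list.append(addr)
            nxt = self.next_entry.pop(addr, None)
            if nxt is not None and nxt != tail:
                self.gc_list.appendleft((nxt, tail))   # continue this chain next call

    bank = BankGc()
    bank.next_entry = {5: 9, 9: 12}     # chain 5 -> 9 -> 12 for one dropped data unit
    bank.drop_chain(head=5, tail=12)    # frees 12 immediately, queues the chain
    bank.collect_one_step()             # frees 5
    bank.collect_one_step()             # frees 9 and stops at the already-freed tail
    print(list(bank.free_list))         # [12, 5, 9]
    ```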
  • Patent number: 10389615
    Abstract: In one embodiment, enhanced packet flow monitoring is performed by packet switching devices in a network. A packet switching device is configured to monitor a flow of packets passing through the packet switching device, including detecting a gap in consecutive packets of the flow and attributing the gap to a cause other than one or more dropped packets based on a particular time duration between the last received packet of the flow before the detected gap and the first received packet of the flow after the detected gap. In one embodiment, the gap is attributed to a cause other than dropped packets when the particular time duration is greater than a threshold value and, conversely, attributed to dropped packets when the particular time duration is less than the same or a different threshold value.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: August 20, 2019
    Assignee: Cisco Technology, Inc.
    Inventors: Tony Changhong Shen, Yu Zhang, Alan Xiao-Rong Wang, Aviv Prital, Doron Oz, Kathy Xia Ke
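    A hedged sketch of the gap-attribution rule above: a gap between consecutive packets of a flow is blamed on packet loss only when the inter-arrival time across the gap is short; a long pause is treated as the sender going idle. Using a single shared threshold is an assumption (the abstract allows the two comparisons to use different thresholds).
    ```python
    def attribute_gap(last_before_gap_ts: float, first_after_gap_ts: float,
                      idle_threshold_s: float = 0.5) -> str:
        """Classify a detected gap as idle time or as dropped packets (sketch)."""
        duration = first_after_gap_ts - last_before_gap_ts
        if duration > idle_threshold_s:
            return "idle-sender"       # long pause: do not count the gap as drops
        return "dropped-packets"       # short pause: missing packets were likely dropped

    print(attribute_gap(10.00, 10.02))   # dropped-packets
    print(attribute_gap(10.00, 11.50))   # idle-sender
    ```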
  • Patent number: 10353824
    Abstract: A method for processing data, a memory management unit and a memory control device are disclosed. In an embodiment, the method includes receiving, by a memory control device, a first data packet carrying a virtual address and a page table base address that are sent by a memory management unit, executing, by the memory control device, a page table walk operation according to the virtual address and the page table base address, to obtain a physical address corresponding to the virtual address and sending, by the memory control device, the physical address corresponding to the virtual address to the memory management unit.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: July 16, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yun Chen, Guangfei Zhang, Kunpeng Song
  • Patent number: 10291680
    Abstract: The streaming media encoding and routing system employs an encoder circuit that constructs a streaming media sequence as a plurality of sequential frames, each frame comprising a plurality of segments. The encoder circuit has a processor that is programmed to place media information into the plurality of segments of each frame according to a predefined priority and further programmed to order the sequence of said segments within each frame such that the segments are progressively priority-ranked from high priority to low priority to define an EPIQ-encoded packet. A processor tests an incoming packet to determine whether it is EPIQ-encoded.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: May 14, 2019
    Assignee: Board of Trustees of Michigan State University
    Inventor: Hayder Radha
  • Patent number: 10021007
    Abstract: Presented herein are techniques to measure latency associated with packets that are processed within a network device. A packet is received at a component of a network device comprising one or more components. A timestamp representing a time of arrival of the packet at a first point in the network device is associated with the packet. The timestamp is generated with respect to a clock of the network device. A latency value for the packet is computed based on at least one of the timestamp and current time of arrival at a second point in the network device. One or more latency statistics are updated based on the latency value.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: July 10, 2018
    Assignee: Cisco Technology, Inc.
    Inventors: Thomas J. Edsall, Wei-Jen Huang, Chih-Tsung Huang, Kelvin Chan
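    A hedged sketch of the latency measurement above: a packet is stamped against the device clock at a first point, its latency is computed when it reaches a second point, and running statistics are updated. The particular statistics kept (count, min, max, sum) are an assumption.
    ```python
    import time

    class LatencyStats:
        """Running latency statistics for one measurement point pair (sketch)."""

        def __init__(self):
            self.count = 0
            self.total = 0.0
            self.minimum = float("inf")
            self.maximum = 0.0

        def stamp(self, packet: dict):
            packet["arrival_ts"] = time.monotonic()      # device-local clock

        def record(self, packet: dict):
            latency = time.monotonic() - packet["arrival_ts"]
            self.count += 1
            self.total += latency
            self.minimum = min(self.minimum, latency)
            self.maximum = max(self.maximum, latency)

        def average(self) -> float:
            return self.total / self.count if self.count else 0.0

    stats = LatencyStats()
    pkt = {}
    stats.stamp(pkt)      # first point in the device
    stats.record(pkt)     # second point in the device
    print(stats.average())
    ```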
  • Patent number: 9691483
    Abstract: In one aspect, techniques for providing a banked content addressable memory (CAM) with counters are provided. A dictionary word may be divided into a plurality of banks. A counter may be associated with each bank of the plurality of banks. The counter may count the number of times a segment of an input word aligned with the bank does not match. A scheduler may schedule comparison of banks with higher probability of not matching before banks with lower probability of not matching. The probability of not matching may be based on the counters.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: June 27, 2017
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Brent Buchanan, John Paul Strachan, Le Zheng
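    A hedged software analogue of the banked CAM above: the dictionary word is split into one segment per bank, each bank keeps a counter of how often it has mismatched, and banks are compared in descending counter order so a non-matching word is rejected as early as possible. The early-exit loop stands in for the hardware scheduler and is an assumption.
    ```python
    class BankedCam:
        def __init__(self, dictionary_word: str, segment_len: int):
            self.segments = [dictionary_word[i:i + segment_len]
                             for i in range(0, len(dictionary_word), segment_len)]
            self.miss_counts = [0] * len(self.segments)   # one counter per bank

        def matches(self, input_word: str) -> bool:
            seg_len = len(self.segments[0])
            # Visit banks in order of how often they have mismatched in the past.
            order = sorted(range(len(self.segments)),
                           key=lambda b: self.miss_counts[b], reverse=True)
            for bank in order:
                segment = input_word[bank * seg_len:(bank + 1) * seg_len]
                if segment != self.segments[bank]:
                    self.miss_counts[bank] += 1   # this bank rejected the word
                    return False                  # stop early: word cannot match
            return True

    cam = BankedCam("deadbeefcafef00d", segment_len=4)
    print(cam.matches("deadbeefcafef00d"))   # True
    print(cam.matches("deadbeefcafe0000"))   # False; the rejecting bank's counter grows
    ```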
  • Patent number: 9461968
    Abstract: Techniques are provided for implementing a zone-based firewall policy. At a virtual network device, information is defined and stored that represents a security management zone for a virtual firewall policy comprising one or more common attributes of applications associated with the security zone. Information representing a firewall rule for the security zone is defined and comprises first conditions for matching common attributes of applications associated with the security zone and an action to be performed on application traffic. Parameters associated with the application traffic are received that are associated with properly provisioned virtual machines. A determination is made whether the application traffic parameters satisfy the conditions of the firewall rule and in response to determining that the conditions are satisfied, the action is performed.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: October 4, 2016
    Assignee: Cisco Technology, Inc.
    Inventors: David Chang, Abhijit Patra, Nagaraj Bagepalli, Rajesh Kumar Sethuraghavan
  • Patent number: 9436432
    Abstract: A buffering mechanism may require that data be written and read at the same time, which normally calls for a multi-port FIFO memory providing simultaneous read and write operations. Multi-port memories carry a large area penalty. Hence, a technique is proposed for avoiding multi-port memories in designs that require sequential read and write operations. In this technique, multiple single-port memories are combined to form a multi-port memory. The resulting memory requires additional control logic but consumes significantly less silicon area.
    Type: Grant
    Filed: December 29, 2006
    Date of Patent: September 6, 2016
    Assignee: STMICROELECTRONICS INTERNATIONAL N.V.
    Inventor: Kapil Batra
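    A hedged sketch of one way to realize the idea above in software terms: interleave FIFO addresses across two single-port banks so that a read and a write usually land in different banks and can both be serviced in one cycle. The parity interleaving and the "stall the write on a bank conflict" rule are assumptions used only to illustrate the technique.
    ```python
    class InterleavedFifo:
        """Dual-ported FIFO behaviour built from two single-port banks (sketch)."""

        def __init__(self, depth: int):
            assert depth % 2 == 0
            self.banks = [[None] * (depth // 2), [None] * (depth // 2)]
            self.depth = depth
            self.wp = 0                 # write pointer (monotonic)
            self.rp = 0                 # read pointer (monotonic)

        def _slot(self, ptr: int):
            return ptr % 2, (ptr % self.depth) // 2     # (bank, row)

        def cycle(self, write_data=None, do_read=False):
            """One memory cycle: each single-port bank accepts at most one access."""
            read_out = None
            wbank, wrow = self._slot(self.wp)
            rbank, rrow = self._slot(self.rp)
            want_write = write_data is not None and self.wp - self.rp < self.depth
            want_read = do_read and self.wp > self.rp
            if want_write and want_read and wbank == rbank:
                want_write = False                      # bank conflict: stall the write
            if want_read:
                read_out = self.banks[rbank][rrow]
                self.rp += 1
            if want_write:
                self.banks[wbank][wrow] = write_data
                self.wp += 1
            return read_out

    fifo = InterleavedFifo(depth=8)
    fifo.cycle(write_data="p0")
    print(fifo.cycle(write_data="p1", do_read=True))    # reads "p0" and writes "p1" in one cycle
    ```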
  • Publication number: 20150124836
    Abstract: A communication device includes: a plurality of output ports; a plurality of queues in which packets are stored so as to be sorted into groups of packets that are output from an identical output port in an identical time period, from among the plurality of output ports; a plurality of first selectors that respectively corresponds to the plurality of output ports, and each of which switches a queue from which packets that are output from the output port are read, between the plurality of queues each time the time period elapses; and a second selector that switches a first selector from which packets are output, between the plurality of first selectors, at time intervals in accordance with output rates of packets of the plurality of output ports.
    Type: Application
    Filed: October 16, 2014
    Publication date: May 7, 2015
    Inventor: Kazuto NISHIMURA
  • Patent number: 8995456
    Abstract: A Clos-network packet switching system may include input modules coupled to a virtual output queue, central modules coupled to the input modules, and output modules coupled to the central modules, each output module having a plurality of cross-point buffers for storing a packet and one or more output ports for outputting the packet.
    Type: Grant
    Filed: April 8, 2009
    Date of Patent: March 31, 2015
    Assignee: Empire Technology Development LLC
    Inventors: Roberto Rojas-Cessa, Chuan-bi Lin, Ziqian Dong
  • Publication number: 20150078398
    Abstract: A method for hash perturbation with queue management in data communication is provided. Using a first set of old queues corresponding to a first hash function, a set of data packets corresponding to a set of sessions is queued. At a first time, the first hash function is changed to a second hash function. A second set of new queues is created corresponding to the second hash function. A data packet is dequeued from a first old queue in the set of old queues. A second data packet is selected from a second queue in the set of old queues. A new hash value is computed for the second data packet using the second hash function. The second data packet is queued in a first new queue such that the second data packet is in position to be delivered first from the first new queue.
    Type: Application
    Filed: December 6, 2013
    Publication date: March 19, 2015
    Applicant: International Business Machines Corporation
    Inventor: PAUL EDWARD MCKENNEY
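    A hedged sketch of the migration step described above: after the hash function changes, packets remaining in the old queues are re-hashed with the new function and moved into the new queues as they are reached. Draining each old queue oldest-first, which keeps per-session order, is an assumption; the abstract's head-of-line placement detail is not modelled.
    ```python
    from collections import deque

    def rehash_and_migrate(old_queues, new_hash, num_new_queues):
        """Move packets from the old queues into new queues keyed by the new hash."""
        new_queues = [deque() for _ in range(num_new_queues)]
        for old_q in old_queues:
            while old_q:
                packet = old_q.popleft()               # oldest packet of this queue first
                idx = new_hash(packet["session"]) % num_new_queues
                new_queues[idx].append(packet)         # per-session order is preserved
        return new_queues

    old = [deque([{"session": "a", "seq": 1}, {"session": "a", "seq": 2}]),
           deque([{"session": "b", "seq": 1}])]
    new = rehash_and_migrate(old, new_hash=lambda s: hash((s, "salt-1")),
                             num_new_queues=4)
    print([list(q) for q in new])
    ```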
  • Publication number: 20150049771
    Abstract: The packet processing method includes receiving a first packet, selecting a first storage area from a plurality of storage areas included in a buffer as a packet storage area in accordance with first time at which the first packet is received, and storing the first packet into the selected first storage area. The first storage area is selected as the packet storage area for the other packets received when a predetermined time period has passed from the first time.
    Type: Application
    Filed: July 1, 2014
    Publication date: February 19, 2015
    Applicant: FUJITSU LIMITED
    Inventors: Shigeo Konriki, Satoshi Namura, Masatoshi Yamamoto, Hisaya Ogasawara, Atsunori Yamamoto
  • Patent number: 8954957
    Abstract: Network devices include hosted virtual machines and virtual machine applications. Hosted virtual machines and their applications implement additional functions and services in network devices. Network devices include data taps for directing network traffic to hosted virtual machines and allowing hosted virtual machines to inject network traffic. Network devices include unidirectional data flow specifications, referred to as hyperswitches. Each hyperswitch is associated with a hosted virtual machine and receives network traffic received by the network device from a single direction. Each hyperswitch processes network traffic according to rules and rule criteria. A hosted virtual machine can be associated with multiple hyperswitches, thereby independently specifying the data flow of network traffic to and from the hosted virtual machine from multiple networks.
    Type: Grant
    Filed: July 1, 2009
    Date of Patent: February 10, 2015
    Assignee: Riverbed Technology, Inc.
    Inventors: David Tze-Si Wu, Kand Ly, Lap Nathan Trac, Alexei Potashnik
  • Publication number: 20150016467
    Abstract: A port queue includes a first memory portion having a first memory access time and a second memory portion having a second memory access time. The first memory portion includes a cache row. The cache row includes a plurality of queue entries. A packet pointer is enqueued in the port queue by writing the packet pointer in a queue entry in the cache row in the first memory. The cache row is transferred to a packet vector in the second memory. A packet pointer is dequeued from the port queue by reading a queue entry from the packet vector stored in the second memory.
    Type: Application
    Filed: September 12, 2014
    Publication date: January 15, 2015
    Inventor: Richard M. WYATT
  • Patent number: 8902915
    Abstract: A context-free (stateless) dataport may allow multiple processors to perform read and write operations on a shared memory. The operations may include, for example, structured data operations such as image and video operations. The dataport may perform addressing computations associated with block memory operations. Therefore, the dataport may be able, for example, to relieve the processors that it serves from this duty. The dataport may be accessed using a message interface that may be implemented in a standard and generalized manner and that may therefore be easily transportable between different types of processors.
    Type: Grant
    Filed: September 24, 2012
    Date of Patent: December 2, 2014
    Assignee: Intel Corporation
    Inventors: Dinakar Munagala, Hong Jiang, Bishara Shomar, Val Cook, Michael K. Dwyer, Thomas Piazza
  • Publication number: 20140321473
    Abstract: An active output buffer controller is used for controlling a packet data output of a main buffer in a network device. The active output buffer controller has a credit evaluation circuit and a control logic. The credit evaluation circuit estimates a credit value based on at least one of an ingress data reception status of the network device and an egress data transmission status of the network device. The control logic compares the credit value with a first predetermined threshold value to generate a comparison result, and controls the packet data output of the main buffer according to at least the comparison result.
    Type: Application
    Filed: March 31, 2014
    Publication date: October 30, 2014
    Applicant: MEDIATEK INC.
    Inventors: Yu-Hsun Chen, Yi-Hsin Yu, Ming-Shi Liou, Ming Zhang
  • Publication number: 20140321474
    Abstract: An output queue of a multi-plane network device includes a first processing circuit, a plurality of storage devices and a second processing circuit. The first processing circuit generates packet selection information based on an arrival sequence of a plurality of packets. The storage devices store a plurality of packet linked lists for the output queue. The second processing circuit dequeues a packet from the output queue by selecting a linked list entry from the packet linked lists according to the packet selection information.
    Type: Application
    Filed: April 15, 2014
    Publication date: October 30, 2014
    Applicant: MEDIATEK INC.
    Inventors: Li-Lien Lin, Ta Hsing Liu, Jui-Tse Lin
  • Publication number: 20140314098
    Abstract: A method and an apparatus for adaptively coping with a network environment are provided. The method and apparatus include a packet descriptor for forwarding MPEG Media Transport (MMT) packets in the network processing of a switch or router that forwards content expressed in the structure of the MMT standard. The method of managing a queue in a broadcasting system includes receiving a Moving Picture Experts Group (MPEG) MMT packet, obtaining a header of the MMT packet, and queuing the MMT packet according to a bitrate type value included in the header of the MMT packet.
    Type: Application
    Filed: April 17, 2014
    Publication date: October 23, 2014
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Kyung-Mo PARK, Sung-Ryeul RHYU, Sung-Oh HWANG, Jae-Yeon SONG
  • Patent number: 8855129
    Abstract: A method for transmitting packets, the method includes receiving multiple packets at multiple queues. The method is characterized by dynamically defining fixed priority queues and weighted fair queuing queues, and scheduling a transmission of packets in response to a status of the multiple queues and in response to the definition. A device for transmitting packets, the device includes multiple queues adapted to receive multiple packets. The device includes a circuit that is adapted to dynamically define fixed priority queues and weighted fair queuing queues out of the multiple queues and to schedule a transmission of packets in response to a status of the multiple queues and in response to the definition.
    Type: Grant
    Filed: June 7, 2005
    Date of Patent: October 7, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Boaz Shahar, Freddy Gabbay, Eyal Soha
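    A hedged sketch of one way to combine the two queue types named above: queues labelled fixed-priority are always drained first in priority order, and the remaining bandwidth goes to the weighted-fair queues, here approximated with deficit round robin. DRR as the weighted scheme and the data layout are illustrative assumptions.
    ```python
    from collections import deque

    class HybridScheduler:
        def __init__(self):
            self.fp_queues = []     # (priority, deque); lower number = higher priority
            self.wfq_queues = []    # dicts: {"q": deque, "weight": w, "deficit": d}

        def pick(self, quantum: int = 1500):
            for _, q in sorted(self.fp_queues, key=lambda t: t[0]):
                if q:
                    return q.popleft()             # strict priority is served first
            for entry in self.wfq_queues:          # one DRR pass over weighted queues
                if not entry["q"]:
                    continue
                entry["deficit"] += quantum * entry["weight"]
                if entry["deficit"] >= entry["q"][0]["size"]:
                    pkt = entry["q"].popleft()
                    entry["deficit"] -= pkt["size"]
                    return pkt
            return None

    sched = HybridScheduler()
    sched.fp_queues.append((0, deque([{"size": 64, "name": "control"}])))
    sched.wfq_queues.append({"q": deque([{"size": 1500, "name": "bulk"}]),
                             "weight": 1, "deficit": 0})
    print(sched.pick())   # control packet: fixed-priority queue wins
    print(sched.pick())   # bulk packet: weighted queue served next
    ```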
  • Patent number: 8837503
    Abstract: Disclosed are methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor. The network processor includes a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically, and in parallel. An ingress module of the network processor stores the incoming data packets in the memory and adds them to an input queue. The network processor processes a data packet by performing a set of network operations on the data packet in a single thread of a core processor. The special purpose processors perform a subset of the set of network operations on the data packet atomically. An egress module retrieves the processed data packets from a plurality of output queues based on a quality of service (QoS) associated with the output queues, and forwards the data packets towards their destination addresses.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: September 16, 2014
    Assignee: Unbound Networks, Inc.
    Inventors: Damon Finney, Ashok Mathur
  • Patent number: 8837504
    Abstract: A buffer temporarily stores data received from a network by a receiving unit. An output mode switching unit switches the mode in which the data received by the receiving unit is output to the buffer, between FIFO and FILO, in accordance with the storage amount of data temporarily stored in the buffer. For example, if the data temporarily stored in the buffer falls below a given threshold value of the buffer, data is stored in the buffer in FIFO. If the data temporarily stored in the buffer exceeds a given threshold value of the buffer, data is stored in the buffer in FILO. A sending unit outputs data taken from the buffer in FIFO or FILO, to a network.
    Type: Grant
    Filed: November 6, 2009
    Date of Patent: September 16, 2014
    Assignee: Fujitsu Limited
    Inventor: Atsushi Shinozaki
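    A hedged sketch of the occupancy-driven mode switch above, applied on the read side of a double-ended queue (the abstract describes switching the storage mode, which has the same net effect for this sketch): below the threshold, data leaves oldest-first (FIFO); above it, newest-first (FILO). The single threshold value is an assumption.
    ```python
    from collections import deque

    class AdaptiveBuffer:
        def __init__(self, threshold: int):
            self.buf = deque()
            self.threshold = threshold

        def push(self, item):
            self.buf.append(item)

        def pop(self):
            if not self.buf:
                return None
            if len(self.buf) > self.threshold:
                return self.buf.pop()       # FILO while occupancy is above the threshold
            return self.buf.popleft()       # FIFO while occupancy is low

    buf = AdaptiveBuffer(threshold=2)
    for item in ("a", "b", "c", "d"):
        buf.push(item)
    print(buf.pop())   # "d": above threshold, newest first
    print(buf.pop())   # "c": still above threshold
    print(buf.pop())   # "a": back below threshold, oldest first
    ```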
  • Patent number: 8837502
    Abstract: A port queue includes a first memory portion having a first memory access time and a second memory portion having a second memory access time. The first memory portion includes a cache row. The cache row includes a plurality of queue entries. A packet pointer is enqueued in the port queue by writing the packet pointer in a queue entry in the cache row in the first memory. The cache row is transferred to a packet vector in the second memory. A packet pointer is dequeued from the port queue by reading a queue entry from the packet vector stored in the second memory.
    Type: Grant
    Filed: May 9, 2012
    Date of Patent: September 16, 2014
    Assignee: Conversant Intellectual Property Management Incorporated
    Inventor: Richard M. Wyatt
  • Patent number: 8824491
    Abstract: Scheduling methods and apparatus are provided for an input-queued switch. The exemplary distributed scheduling process achieves 100% throughput for any admissible Bernoulli arrival traffic. The exemplary distributed scheduling process includes scheduling variable-size packets. The exemplary distributed scheduling process may be easily implemented with low-rate control, or by sacrificing a small amount of throughput. Simulation results also showed that this distributed scheduling process can provide very good delay performance for different traffic patterns. The exemplary distributed scheduling process may therefore be a good candidate for large-scale high-speed switching systems.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: September 2, 2014
    Assignee: Polytechnic Institute of New York University
    Inventors: Shivendra S. Panwar, Yanming Shen, Shunyuan Ye
  • Patent number: 8817806
    Abstract: An apparatus and a method for flow control between a Packet Data Convergence Protocol (PDCP) layer and a Radio Link Control (RLC) layer in a communication system are provided. The method includes storing Service Data Units (SDUs) to be transferred to the RLC layer, receiving information on a capacity that is currently unused in a buffer of the RLC layer from the RLC layer, and generating Packet Data Units (PDUs) from SDUs, a capacity of which corresponds to the information, among packets stored in a buffer of the PDCP layer, and then transferring the generated PDUs to the RLC layer.
    Type: Grant
    Filed: February 1, 2011
    Date of Patent: August 26, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong-Sook Kim, Byung-Suk Kim, Seong-Ryong Kang, Chul-Ki Lee, Hong-Kyu Jeong
  • Patent number: 8811417
    Abstract: A Network Interface (NI) includes a host interface, which is configured to receive from a host processor of a node one or more cross-channel work requests that are derived from an operation to be executed by the node. The NI includes a plurality of work queues for carrying out transport channels to one or more peer nodes over a network. The NI further includes control circuitry, which is configured to accept the cross-channel work requests via the host interface, and to execute the cross-channel work requests using the work queues by controlling an advance of at least a given work queue according to an advancing condition, which depends on a completion status of one or more other work queues, so as to carry out the operation.
    Type: Grant
    Filed: November 15, 2010
    Date of Patent: August 19, 2014
    Assignee: Mellanox Technologies Ltd.
    Inventors: Noam Bloch, Gil Bloch, Ariel Shachar, Hillel Chapman, Ishai Rabinovitz, Pavel Shamis, Gilad Shainer
  • Publication number: 20140226677
    Abstract: The present subject-matter relates to transmitting a real-time data stream, in particular simultaneously to multiple receivers over unreliable networks (e.g., wireless multicast), in a timely and reliable manner, and specifically to a method, apparatus, and computer program product for feedback-based real-time network coding. A computer-implemented method is disclosed for a transmitting node, a receiving node, and an intermediate node performing feedback-based real-time network coding from a transmitter to one or more receivers, comprising: receiving a linear combination of packets from the transmitter; determining whether the received linear combination of packets is linearly independent of previous linear combinations of packets; determining the validity of a priority level of the packets; determining the validity of the deadline of the packets; and determining whether a packet is to be removed from a transmit queue and, if so, removing it. The corresponding transmitting, receiving, and intermediate nodes are also disclosed.
    Type: Application
    Filed: September 3, 2012
    Publication date: August 14, 2014
    Applicant: UNIVERSIDADE DO PORTO
    Inventors: Rui Filipe Mendes Alves da Costa, Diogo Joao De Sousa Ferreira, Joao Francisco Cordeiro De Oliveira Barros
  • Patent number: 8767722
    Abstract: A switching network includes an upper tier having a master switch and a lower tier including a plurality of lower tier entities. The master switch, which has a plurality of ports each coupled to a respective lower tier entity, implements on each of the ports a plurality of virtual ports each corresponding to a respective one of a plurality of remote physical interfaces (RPIs) at the lower tier entity coupled to that port. Data traffic communicated between the master switch and RPIs is queued within virtual ports that correspond to the RPIs with which the data traffic is communicated. The master switch applies data handling to the data traffic in accordance with a control policy based at least upon the virtual port in which the data traffic is queued, such that the master switch applies different policies to data traffic queued to two virtual ports on the same port of the master switch.
    Type: Grant
    Filed: August 27, 2012
    Date of Patent: July 1, 2014
    Assignee: International Business Machines Corporation
    Inventors: Keshav Kamble, Amitabha Biswas, Dar-Ren Leu, Chandarani J. Mendon, Nilanjan Mukherjee, Vijoy Pandey
  • Publication number: 20140177644
    Abstract: Disclosed are methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor. The network processor includes a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically, and in parallel. An ingress module of the network processor stores the incoming data packets in the memory and adds them to an input queue. The network processor processes a data packet by performing a set of network operations on the data packet in a single thread of a core processor. The special purpose processors perform a subset of the set of network operations on the data packet atomically. An egress module retrieves the processed data packets from a plurality of output queues based on a quality of service (QoS) associated with the output queues, and forwards the data packets towards their destination addresses.
    Type: Application
    Filed: December 2, 2013
    Publication date: June 26, 2014
    Applicant: Unbound Networks, Inc.
    Inventors: Damon Finney, Ashok Mathur
  • Patent number: 8750337
    Abstract: An apparatus and method for managing a preference channel are provided which can reduce a delay time when a channel change is made between preference channels, by storing a preference channel directly selected by a mobile terminal user in a mobile broadcast system. A mobile terminal receives and demultiplexes multiplexed logical channels on which data streams are transmitted from a service provider through a communication network. Preference channels of the demultiplexed logical channels are dynamically allocated to decoding buffers using stored information. A decoding time at which data of the decoding buffers is accessed and decoded is computed using reference information. A decoding operation is performed at the computed decoding time, and decoded channel-by-channel elementary streams are stored in a memory. The decoded channel-by-channel elementary streams stored in the memory are displayed on a screen of the mobile terminal.
    Type: Grant
    Filed: October 16, 2006
    Date of Patent: June 10, 2014
    Assignees: Samsung Electronics Co., Ltd., Industry-University Cooperation Foundation Hanyang University
    Inventors: Young-Joo Song, Ki-Ho Jung, Young-Kwon Lim, Je-Chang Jeong, Kook-Heui Lee, Jae-Hyun Park
  • Publication number: 20140153582
    Abstract: The present invention generally provides a packet buffer random access memory (PBRAM) device including a memory array, a plurality of input ports, and a plurality of serial registers associated with the input ports. The plurality of input ports permit multiple devices to concurrently access the memory in a non-blocking manner. The serial registers enable data to be received from the input ports while packet data is concurrently written to the memory array. The memory performs all management of network data queues so that all port requests can be satisfied within the real-time constraints of network packet switching.
    Type: Application
    Filed: February 7, 2014
    Publication date: June 5, 2014
    Applicant: MOSAID Technologies Incorporated
    Inventor: David E. JONES
  • Patent number: 8730982
    Abstract: A network device for processing data includes at least one ingress module for performing switching functions on incoming data, a memory management unit for storing the incoming data and at least one egress module for transmitting the incoming data to at least one egress port. The at least one egress module includes an egress scheduling module and multiple queues per each of the at least one egress port. Each of the multiple queues serve data attributable to a class of service, and the egress scheduling module is configured to service a minimum bandwidth requirement for each of the multiple queues and then to service the multiple queues to allow for transmission of a maximum allowable bandwidth through a weighting of each of the multiple queues.
    Type: Grant
    Filed: November 9, 2006
    Date of Patent: May 20, 2014
    Assignee: Broadcom Corporation
    Inventors: Chien-Hsien Wu, Bruce Kwan, Philip Chen
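    A hedged sketch of the two-pass egress scheduling above: each class-of-service queue first receives its configured minimum bandwidth, and whatever port capacity remains is divided among the still-backlogged queues in proportion to their weights. Byte-granted-per-interval accounting is an assumption; real devices typically use per-cycle credits.
    ```python
    def schedule_egress(queues, port_capacity):
        """queues: name -> {"min": bytes, "weight": w, "backlog": bytes}.
        Returns the bytes granted to each queue in this scheduling interval."""
        grants = {}
        remaining = port_capacity
        for name, q in queues.items():                 # pass 1: minimum guarantees
            grant = min(q["min"], q["backlog"], remaining)
            grants[name] = grant
            remaining -= grant
        active = {n: q for n, q in queues.items() if q["backlog"] > grants[n]}
        total_weight = sum(q["weight"] for q in active.values())
        for name, q in active.items():                 # pass 2: weighted share of the rest
            if total_weight == 0 or remaining <= 0:
                break
            share = remaining * q["weight"] // total_weight
            grants[name] += min(share, q["backlog"] - grants[name])
        return grants

    queues = {"voice": {"min": 2000, "weight": 1, "backlog": 1500},
              "video": {"min": 4000, "weight": 3, "backlog": 9000},
              "data":  {"min": 1000, "weight": 1, "backlog": 9000}}
    print(schedule_egress(queues, port_capacity=12000))
    ```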
  • Patent number: 8706903
    Abstract: An audio-on-demand communication system provides real-time playback of audio data transferred via telephone lines or other communication links. One or more audio servers include memory banks which store compressed audio data. At the request of a user at a subscriber PC, an audio server transmits the compressed audio data over the communication link to the subscriber PC. The subscriber PC receives and decompresses the transmitted audio data in less than real-time using only the processing power of the CPU within the subscriber PC. According to one aspect of the present invention, high quality audio data compressed according to lossless compression techniques is transmitted together with normal quality audio data. According to another aspect of the present invention, metadata, or extra data, such as text, captions, still images, etc., is transmitted with audio data and is simultaneously displayed with corresponding audio data.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: April 22, 2014
    Assignee: Intel Corporation
    Inventors: Robert D. Glaser, Mark O'Brien, Thomas B. Boutell, Randy Glen Goldberg
  • Patent number: 8665895
    Abstract: Advanced and dynamic physical layer device capabilities utilizing a link interruption signal. The physical layer device can use a link interruption signal to signal to a media access controller device that the link has temporarily been interrupted. This link interruption signal can be generated in response to one or more programmable modes of the physical layer device that are used to support the advanced and dynamic physical layer device capabilities.
    Type: Grant
    Filed: December 30, 2010
    Date of Patent: March 4, 2014
    Assignee: Broadcom Corporation
    Inventors: Wael William Diab, Scott Powell
  • Patent number: 8665894
    Abstract: A mechanism for combining a plurality of point-to-point data channels to provide a high-bandwidth data channel having an aggregated bandwidth equivalent to the sum of the bandwidths of the data channels used is provided. A mechanism for scattering segments of incoming data packets, called data chunks, among available point-to-point data channel interfaces is further provided. A decision as to the data channel interface over which to send a data chunk can be made by examining a fullness status of a FIFO coupled to each interface. An identifier of a data channel on which to expect a subsequent data chunk can be provided in a control word associated with a present chunk of data. Using such information in control words, a receive-end interface can reassemble packets by looking to the control word in a currently processing data chunk to find a subsequent data chunk.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: March 4, 2014
    Assignee: Cisco Technology, Inc.
    Inventors: Yiren R. Huang, Raymond Kloth
  • Patent number: 8649389
    Abstract: Transmitting from a mobile terminal to a telecommunication network data stored in a plurality of queues, each queue having a respective transmission priority, includes setting the data in each of the queues to be either primary data or secondary data, or a combination of primary data and secondary data. The data may be transmitted from the queues in an order in dependence upon the priority of the queue and whether the data in that queue are primary data or secondary data. Resources for data transmission may be allocated such that the primary data of each of the queues are transmitted at a minimum predetermined rate and such that the secondary data of each of the queues are transmitted at a maximum predetermined rate, greater than the minimum predetermined rate.
    Type: Grant
    Filed: March 30, 2007
    Date of Patent: February 11, 2014
    Assignee: Vodafone Group Services Limited
    Inventors: David Fox, Alessandro Goia
  • Patent number: 8644143
    Abstract: In a passive optical network, dynamic bandwidth allocation and queue management methods and algorithms, designed to avoid fragmentation loss, guarantee that the length of a grant issued by an OLT will precisely match the count of bytes to be transmitted to an ONU. The methods include determining an ONU uplink transmission egress based on a three-stage test, and various embodiments of methods for setting ONU report thresholds.
    Type: Grant
    Filed: February 1, 2011
    Date of Patent: February 4, 2014
    Assignee: PMC-Sierra Israel Ltd.
    Inventors: Onn Haran, Ariel Maislos, Barak Lifshitz
  • Patent number: 8630304
    Abstract: A switch includes a reserved pool of buffers in a shared memory. The reserved pool of buffers is reserved for exclusive use by an egress port. The switch includes pool select logic which selects a free buffer from the reserved pool for storing data received from an ingress port to be forwarded to the egress port. The shared memory also includes a shared pool of buffers. The shared pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the shared pool upon detecting no free buffer in the reserved pool. The shared memory may also include a multicast pool of buffers. The multicast pool of buffers is shared by a plurality of egress ports. The pool select logic selects a free buffer in the multicast pool upon detecting an IP Multicast data packet received from an ingress port.
    Type: Grant
    Filed: August 5, 2011
    Date of Patent: January 14, 2014
    Assignee: MOSAID Technologies Incorporated
    Inventor: David A. Brown
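    A hedged sketch of the pool selection above: a buffer is taken from the egress port's reserved pool when one is free, otherwise from the pool shared by all egress ports, while IP multicast packets draw from a separate multicast pool. The order of the multicast check and the free-list representation are assumptions consistent with the abstract.
    ```python
    from collections import deque

    class BufferPools:
        def __init__(self, reserved_per_port: int, shared: int, multicast: int):
            self.reserved = {}                            # egress port -> free reserved buffers
            self.reserved_per_port = reserved_per_port
            self.shared = deque(range(5000, 5000 + shared))
            self.multicast = deque(range(9000, 9000 + multicast))

        def _reserved_for(self, port: int) -> deque:
            if port not in self.reserved:
                base = 1000 * (port + 1)                  # synthetic buffer ids for the sketch
                self.reserved[port] = deque(range(base, base + self.reserved_per_port))
            return self.reserved[port]

        def allocate(self, egress_port: int, is_ip_multicast: bool):
            if is_ip_multicast:
                return self.multicast.popleft() if self.multicast else None
            reserved = self._reserved_for(egress_port)
            if reserved:
                return reserved.popleft()                 # exclusive reserved pool first
            return self.shared.popleft() if self.shared else None

    pools = BufferPools(reserved_per_port=1, shared=4, multicast=2)
    print(pools.allocate(egress_port=3, is_ip_multicast=False))   # reserved buffer
    print(pools.allocate(egress_port=3, is_ip_multicast=False))   # falls back to the shared pool
    print(pools.allocate(egress_port=3, is_ip_multicast=True))    # multicast pool
    ```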
  • Patent number: 8625427
    Abstract: One embodiment of the present invention provides a system that facilitates flow control of multi-path-switched data frames. During operation, the system transmits data frames destined to an egress edge device from an ingress edge device across different switched paths, based on the queue status of a core switching device and the queue status of the egress edge device. The egress edge device is separate from the core switching device.
    Type: Grant
    Filed: September 3, 2009
    Date of Patent: January 7, 2014
    Assignee: Brocade Communications Systems, Inc.
    Inventors: John M. Terry, Joseph Juh-En Cheng, Jan Bialkowski
  • Publication number: 20140003238
    Abstract: Systems and methods for buffering or delaying data packets to facilitate optimized distribution of content associated with the data packets are disclosed. Data packets sent from content providers may be received by a service provider, where information associated with the data packets may be identified. The identified information may be used to associate the data packets with an intended destination for the data packets and identify a type of the data packets, such as a video data packet. Video data packets (and other data packets) may be buffered or delayed such that more time-critical data packets may be accelerated into one or more QAM channels carrying the content. The delayed data packets may be time-sliced into the one or more QAM channels, which may displace empty slots or gaps that may waste bandwidth associated with the QAM channels.
    Type: Application
    Filed: July 2, 2012
    Publication date: January 2, 2014
    Applicant: COX COMMUNICATIONS, INC.
    Inventor: Jeff Finkelstein
  • Publication number: 20130329748
    Abstract: A crossbar switch has N input ports, M output ports, and a switching matrix with N×M crosspoints. In an embodiment, each crosspoint contains an internal queue (XQ), which can store one or more packets to be routed. Traffic rates to be realized between all Input/Output (IO) pairs of the switch are specified in an N×M traffic rate matrix, where each element equals a number of requested cell transmission opportunities between each IO pair within a scheduling frame of F time-slots. An efficient algorithm for scheduling N traffic flows with traffic rates based upon a recursive and fair decomposition of a traffic rate vector with N elements, is proposed. To reduce memory requirements a shared row queue (SRQ) may be embedded in each row of the switching matrix, allowing the size of all the XQs to be reduced. To further reduce memory requirements, a shared column queue may be used in place of the XQs.
    Type: Application
    Filed: May 29, 2013
    Publication date: December 12, 2013
    Inventor: Tadeusz H. Szymanski
  • Publication number: 20130315261
    Abstract: An in-band signaling model media control (MC) terminal for an HPNA network includes a frame classification entity (FCE) and a frame scheduling entity (FSE) and provides end-to-end Quality of Service (QoS) by passing the QoS requirements from higher layers to the lower layers of the HPNA network. The FCE is located at an LLC sublayer of the MC terminal, and receives a data frame from a higher layer of the MC terminal that is part of a QoS stream. The FCE classifies the received data frame for a MAC sublayer of the MC terminal based on QoS information contained in the received data frame, and associates the classified data frame with a QoS stream queue corresponding to a classification of the data frame. The FSE is located at the MAC sublayer of the MC terminal, and schedules transmission of the data frame to a destination for the data frame based on a QoS requirement associated with the QoS stream.
    Type: Application
    Filed: August 9, 2013
    Publication date: November 28, 2013
    Applicant: AT&T Intellectual Property II, L.P.
    Inventor: Wei Lin
  • Patent number: 8594113
    Abstract: Embodiments of a transmit-side scaler and method for processing outgoing information packets using thread-based queues are generally described herein. Other embodiments may be described and claimed. In some embodiments, a process ID stored in a token area may be compared with a process ID of an application that generated an outgoing information packet to obtain a transmit queue. The token area may be updated with a process ID stored in an active threads table when the process ID stored in the token area does not match the process ID of the application.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: November 26, 2013
    Assignee: Cisco Technology, Inc.
    Inventor: Shrijeet Mukherjee
  • Patent number: 8571049
    Abstract: A device may include a first line card and a second line card. The first line card may include a memory including queues. In addition, the first line card may include a processor. The processor may identify, among the queues, a queue whose size is to be modified, change the size of the identified queue, receive a packet, insert a header cell associated with the packet in the identified queue, identify a second line card from which the packet is to be sent to another device in a network, remove the header cell from the identified queue, and forward the header cell to the second line card. The second line card may receive the header cell from the first line card, and send the packet to the other device in the network.
    Type: Grant
    Filed: November 24, 2009
    Date of Patent: October 29, 2013
    Assignee: Verizon Patent and Licensing, Inc.
    Inventors: Dante J. Pacella, Norman Richard Solis, Harold Jason Schiller
  • Patent number: 8559439
    Abstract: A method and apparatus for queue-ordering commands in a multi-engine, multi-queue and/or multi-flow environment are provided. Commands from single or multiple queues and multiple flows are processed by multiple engines with different processing times and/or out of order, which breaks the sequential order of commands from the same input queue; after processing, the commands are distributed across the output buffers of the multiple engines. Processed commands are stored temporarily in a dedicated command output buffer associated with each engine and are re-ordered while being written out. Commands can also be scheduled to idle engines to achieve maximum throughput, thus utilizing the engines in an optimal manner.
    Type: Grant
    Filed: November 3, 2011
    Date of Patent: October 15, 2013
    Assignee: PMC-Sierra US, Inc.
    Inventors: Anil B. Dongare, Kuan Hua Tan
  • Patent number: 8553545
    Abstract: A network device may handle packet congestion in a network. In one implementation, the network device may receive a packet associated with a quality of service priority class and with a connection to a user device. The network device may include an output queue associated with the priority class of the packet. The output queue may be congested. The network device may determine whether the connection associated with the packet is a guaranteed bit rate connection. The network device may queue the packet according to a first action policy function when the connection associated with the packet is a guaranteed bit rate connection and may queue the packet according to a second action policy function when the connection associated with the packet is not a guaranteed bit rate connection.
    Type: Grant
    Filed: June 22, 2010
    Date of Patent: October 8, 2013
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jay J. Lee, Deepak Kakadia, Thomas Tan
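    A hedged sketch of the congested-queue decision above: a packet on a guaranteed-bit-rate (GBR) connection is handled by one action policy and a packet on a non-GBR connection by another. The concrete policies chosen here (always admit GBR, probabilistically drop non-GBR) are assumptions; the abstract only requires that the two cases use different policy functions.
    ```python
    import random

    def handle_congested_packet(packet, queue, is_gbr: bool,
                                drop_probability: float = 0.3) -> str:
        """Queue or drop a packet whose class-of-service queue is congested (sketch)."""
        if is_gbr:
            queue.append(packet)       # first action policy: protect GBR traffic
            return "queued-gbr"
        if random.random() < drop_probability:
            return "dropped"           # second action policy: shed best-effort load
        queue.append(packet)
        return "queued-best-effort"

    q = []
    print(handle_congested_packet({"id": 1}, q, is_gbr=True))
    print(handle_congested_packet({"id": 2}, q, is_gbr=False))
    ```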
  • Patent number: 8547846
    Abstract: A packet is classified into a class. A priority value is assigned to the packet wherein packets in a flow are assigned priorities according to some probability distribution within some band. A determination is made, at a network device for a highest latency class, whether a sum of queued packet sizes of previously received packets having an equal or smaller latency class than the packet and larger or equal priority than the packet is larger than a threshold value. When the sum is larger, the packet is dropped, otherwise a determination is made whether a latency class of the packet is less than the latency class of the network device. When the latency class is not less, the packet is stored in a queue for the latency class. When the latency class is less, then the process is repeated until the packet is dropped or stored in a queue.
    Type: Grant
    Filed: August 15, 2011
    Date of Patent: October 1, 2013
    Assignee: Raytheon BBN Technologies Corp.
    Inventors: Laura Jane Poplawski Ma, Frank Kastenholtz, Gregory Stephen Lauer, Walter Clark Milliken, Gregory Donald Troxel
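    A hedged sketch of the admission test above: starting from the device's highest latency class, the total size of queued packets with an equal-or-smaller latency class and equal-or-greater priority than the arriving packet is compared with a threshold; the packet is dropped when the sum is too large, queued once its own latency class is reached, and otherwise tested again at the next lower class. Per-class thresholds and the data layout are assumptions.
    ```python
    def admit(packet, queues, thresholds, device_classes):
        """device_classes: latency classes from highest to lowest; smaller = lower latency.
        queues: latency class -> list of queued packet dicts (sketch layout)."""
        all_queued = [p for q in queues.values() for p in q]
        competing = sum(p["size"] for p in all_queued
                        if p["latency_class"] <= packet["latency_class"]
                        and p["priority"] >= packet["priority"])
        for device_class in device_classes:               # walk down the latency classes
            if competing > thresholds[device_class]:
                return "dropped"
            if packet["latency_class"] >= device_class:   # class is "not less": store it
                queues[device_class].append(packet)
                return "queued@class%d" % device_class
        return "dropped"

    queues = {1: [{"size": 500, "latency_class": 1, "priority": 7}], 2: []}
    thresholds = {1: 4000, 2: 8000}
    print(admit({"size": 1500, "latency_class": 2, "priority": 5},
                queues, thresholds, device_classes=[2, 1]))
    ```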