Having Input Queuing Only Patents (Class 370/415)
  • Patent number: 8411593
    Abstract: A space switch includes a buffer having a plurality of serial inputs, a plurality of de-serializers, each coupled to a respective input, a plurality n of buffers and a media access controller having inputs coupled to the plurality of de-serializers, data outputs coupled to the buffers, and two control outputs coupled to respective buffers for buffering input data at a clock rate one-nth that of the input data, and a switch fabric connected to the buffers for matching buffer data throughput with switch data throughput. Preferably the buffer is a bifurcate buffer. The space switch so described ensures matching of buffer and switch-fabric throughput.
    Type: Grant
    Filed: December 20, 2007
    Date of Patent: April 2, 2013
    Assignee: IDT Canada Inc
    Inventor: David Brown
  • Patent number: 8369219
    Abstract: A system for managing bandwidth use in a device. In a specific embodiment, the device is a network device that includes a first data scheduler that is adapted to initially share available device bandwidth among a first type of traffic and a second type of traffic on an as-needed basis. A traffic monitor communicates with the first scheduler and causes the first data scheduler to guarantee predetermined transmission characteristics for the second type of traffic. The first data scheduler includes one or more routines for prioritizing the first type of traffic above the second type of traffic when the network device is in a first operational mode, and prioritizing the second type of traffic above the first type of traffic when the network device is in a second operational mode. The minimum transmission characteristics include a minimum service rate and a minimum latency for the second type of traffic.
    Type: Grant
    Filed: September 19, 2006
    Date of Patent: February 5, 2013
    Assignee: Cisco Technology, Inc.
    Inventors: Dipankar Bhatt Acharya, Hugh Holbrook, Fusun Ertemalp
  • Patent number: 8370545
    Abstract: A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip.
    Type: Grant
    Filed: February 3, 2012
    Date of Patent: February 5, 2013
    Assignee: Net Navigation Systems, LLC
    Inventors: Andrew Li, Michael Lau, Asad Khamisy
  • Patent number: 8355338
    Abstract: A method of processing sequential information in near real-time data packets streamed over a network includes providing a process running according to a process clock. The process buffers and decodes the streamed data packets. The speed of the process clock is dynamically controlled in accordance with a receipt time value of a data packet. The speed of the process clock is run faster or slower than a system clock.
    Type: Grant
    Filed: July 14, 2009
    Date of Patent: January 15, 2013
    Assignee: Hong Kong Applied Science and Technology Research Institute Co. Ltd.
    Inventor: Wai Keung Wu
  • Patent number: 8351428
    Abstract: A digital broadcast transmitting/receiving system and a method for processing data are disclosed. The method for processing data may enhance the receiving performance of the receiving system by performing additional coding and multiplexing processes on the traffic information data and transmitting the processed data. Thus, robustness is provided to the traffic information data, thereby enabling the data to withstand a channel environment that is subject to constant and substantial change.
    Type: Grant
    Filed: January 5, 2010
    Date of Patent: January 8, 2013
    Assignee: LG Electronics Inc.
    Inventors: Jin Pil Kim, Young In Kim, Ho Taek Hong, In Hwan Choi, Kook Yeon Kwak, Hyoung Gon Lee, Byoung Gill Kim, Jin Woo Kim, Jong Moon Kim, Won Gyu Song
  • Patent number: 8331377
    Abstract: Embodiments disclosed here relate to scheduling packet transmission in a multi-carrier communication system. In an embodiment, a master scheduler having at least one processor and at least one memory operably connected to the at least one processor is adapted to execute instructions stored in the at least one memory, the instructions comprising selecting a packet with a highest packet metric from among candidate packets from one carrier of a plurality of carriers, whereby expedited forwarding flows do not have a higher metric on another carrier.
    Type: Grant
    Filed: February 27, 2006
    Date of Patent: December 11, 2012
    Assignee: QUALCOMM Incorporated
    Inventors: Rashid A. Attar, Peter J. Black, Mehmet Gurelli, Mehmet Yavuz, Naga Bhushan
  • Patent number: 8325736
    Abstract: A hierarchy of schedules propagate minimum guaranteed scheduling rates among scheduling layers in a hierarchical schedule. The minimum guaranteed scheduling rate for a parent schedule entry is typically based on the summation of the minimum guaranteed scheduling rates of its immediate child schedule entries. This propagation of minimum rate scheduling guarantees for a class of traffic can be dynamic (e.g., based on the active traffic for this class of traffic, active services for this class of traffic), or statically configured. One embodiment also includes multiple scheduling lanes for scheduling items, such as, but not limited to packets or indications thereof, such that different categories of traffic (e.g., propagated minimum guaranteed scheduling rate, non-propagated minimum guaranteed scheduling rate, high priority, excess rate, etc.) of scheduled items can be propagated through the hierarchy of schedules accordingly without being blocked behind a lower priority or different type of traffic.
    Type: Grant
    Filed: April 18, 2009
    Date of Patent: December 4, 2012
    Assignee: Cisco Technology, Inc.
    Inventors: Earl T. Cohen, Robert Olsen, Christopher J. Kappler, Anna Charny
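A minimal illustrative sketch of the rate-propagation idea in the abstract above (patent 8325736), assuming a simple tree of schedule entries in which a parent's minimum guaranteed rate is recomputed as the sum of its active children's rates; the class, method names, and example rates are assumptions, not taken from the patent.

```python
class ScheduleEntry:
    """One node in a hierarchical schedule; rates in, e.g., Mb/s."""
    def __init__(self, name, min_rate=0.0, active=True):
        self.name = name
        self.min_rate = min_rate
        self.active = active
        self.children = []
        self.parent = None

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

    def propagate_min_rate(self):
        # Parent guarantee = sum of guarantees of its active children,
        # re-evaluated whenever a child is added, removed, or (de)activated.
        self.min_rate = sum(c.min_rate for c in self.children if c.active)
        if self.parent is not None:
            self.parent.propagate_min_rate()

root = ScheduleEntry("port")
vlan = ScheduleEntry("vlan-10"); root.add_child(vlan)
for name, rate in [("voice", 2.0), ("video", 8.0)]:
    vlan.add_child(ScheduleEntry(name, min_rate=rate))
vlan.propagate_min_rate()
print(root.min_rate)   # 10.0 -- the guarantee bubbles up the hierarchy
```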
  • Patent number: 8325735
    Abstract: One embodiment includes distributing user traffic packets to a plurality of queues, and draining the queues of the user traffic packets according to a defined methodology. The drained user traffic packets are sent to a plurality of physical channel interfaces. Each of the plurality of physical channel interfaces interfaces with a respective channel of the backhaul. The sending step sends each of the drained user traffic packets to the physical channel.
    Type: Grant
    Filed: June 28, 2007
    Date of Patent: December 4, 2012
    Assignee: Alcatel Lucent
    Inventors: Mohammad Riaz Khawer, Mark H. Kraml, Stephen George Pisano, Tomas S. Yang
  • Patent number: 8320247
    Abstract: A method may include receiving a data unit and identifying a state of a memory storing data units. The method may include selecting a threshold value having a first threshold unit or a second threshold unit based on the state of the memory. The method may include comparing the threshold value to a queue state using the first threshold unit if the memory is in a first state. The method may include comparing the threshold value to the queue state using the second threshold unit if the memory is in a second state.
    Type: Grant
    Filed: April 23, 2010
    Date of Patent: November 27, 2012
    Assignee: Juniper Networks, Inc.
    Inventors: Paul J. Giacobbe, John C. Carney
  • Patent number: 8315268
    Abstract: A machine implemented method and system for communication between a computing system and an adapter is provided. An application from among a plurality of applications sends a message to the adapter with a value V. The adapter queues the message at the first storage location and writes the value V at a second storage location after the message is successfully queued at the first storage location. To determine if the message was successfully queued, the computing system reads the written value at the second storage location and compares it to the value V that was sent with the message.
    Type: Grant
    Filed: June 7, 2010
    Date of Patent: November 20, 2012
    Assignee: QLOGIC, Corporation
    Inventors: Kanoj Sarcar, Sanjeev Jorapur
  • Patent number: 8259739
    Abstract: A mechanism for combining a plurality of point-to-point data channels to provide a high-bandwidth data channel having an aggregated bandwidth equivalent to the sum of the bandwidths of the data channels used is provided. A mechanism for scattering segments of incoming data packets, called data chunks, among available point-to-point data channel interfaces is further provided. A decision as to the data channel interface over which to send a data chunk can be made by examining a fullness status of a FIFO coupled to each interface. An identifier of a data channel on which to expect a subsequent data chunk can be provided in a control word associated with a present chunk of data. Using such information in control words, a receive-end interface can reassemble packets by looking to the control word in the data chunk currently being processed to find the subsequent data chunk.
    Type: Grant
    Filed: October 31, 2005
    Date of Patent: September 4, 2012
    Assignee: Cisco Technology, Inc.
    Inventors: Yiren R. Huang, Raymond Kloth
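A rough sketch of the scatter decision in the abstract above (patent 8259739), assuming each channel interface exposes a FIFO fill level and that each chunk's control word records the channel chosen for the next chunk; the chunk size and all names are illustrative assumptions.

```python
CHUNK = 64  # illustrative chunk size in bytes

def stripe(packet: bytes, fifo_fill: list) -> list:
    """Split a packet into chunks and assign each to the least-full FIFO.
    Each chunk's control word records the channel of the *next* chunk so
    the receiver can follow the sequence and reassemble in order."""
    chunks = [packet[i:i + CHUNK] for i in range(0, len(packet), CHUNK)]
    channels = []
    for c in chunks:
        ch = min(range(len(fifo_fill)), key=fifo_fill.__getitem__)
        fifo_fill[ch] += len(c)          # account for the queued chunk
        channels.append(ch)
    out = []
    for i, (c, ch) in enumerate(zip(chunks, channels)):
        nxt = channels[i + 1] if i + 1 < len(chunks) else None
        out.append({"channel": ch, "next_channel": nxt, "data": c})
    return out

print([(d["channel"], d["next_channel"]) for d in stripe(b"x" * 200, [10, 0, 5])])
```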
  • Patent number: 8238346
    Abstract: A node in a mobile ad-hoc network or other network classifies packets (a) in accordance with a first set of priority levels based on urgency and (b) within each priority level of the first set, in accordance with a second set of priority levels based on importance. The node: (a) queues packets classified at highest priority levels of the first and/or second sets in high-priority output queues; (b) queues packets classified at medium priority levels of the first set in medium-priority output queue(s); and (c) queues packets classified at low priority levels of the first and/or second set in low-priority output queue(s). Using an output priority scheduler, the node serves the packets in order of the priorities of the output queues. In this manner, orthogonal aspects of DiffServ and MLPP can be resolved in a MANET or other network.
    Type: Grant
    Filed: July 9, 2009
    Date of Patent: August 7, 2012
    Assignee: The Boeing Company
    Inventors: Wayne R. Howe, Muhammad Akber Qureshi
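A small sketch of the two-axis classification in the abstract above (patent 8238346), assuming numeric urgency and importance levels (lower is more urgent/important) and three output queues; the thresholds are illustrative assumptions rather than the patent's mapping.

```python
from collections import deque

HIGH, MED, LOW = deque(), deque(), deque()

def enqueue(pkt, urgency: int, importance: int):
    """Classify by urgency first, then by importance within that level,
    and place the packet in one of three output queues."""
    if urgency == 0 or importance == 0:
        HIGH.append(pkt)
    elif urgency == 1:
        MED.append(pkt)
    else:
        LOW.append(pkt)

def dequeue():
    """Output priority scheduler: serve the highest-priority non-empty queue."""
    for q in (HIGH, MED, LOW):
        if q:
            return q.popleft()
    return None

enqueue("voice", urgency=0, importance=2)
enqueue("bulk", urgency=2, importance=2)
enqueue("flash-override", urgency=1, importance=0)
print(dequeue(), dequeue(), dequeue())  # voice flash-override bulk
```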
  • Patent number: 8223641
    Abstract: A communications system provides a dynamic setting of optimal buffer sizes in IP networks. A method for dynamically adjusting buffer capacities of a router may include steps of monitoring a number of incoming packets to the router, determining a packet arrival rate, and determining the buffer capacities based at least partially on the packet arrival rate. Router buffers are controlled to exhibit the determined buffer capacities, e.g. during writing packets into and reading packets from each of the buffers as part of a packet routing performed by the router. In the disclosed examples, buffer size may be based on the mean arrival rate and one or more of mean packet size and mean waiting time.
    Type: Grant
    Filed: July 28, 2008
    Date of Patent: July 17, 2012
    Assignee: Cellco Partnership
    Inventors: Jay J. Lee, Thomas Tan, Deepak Kakadia, Emer M. Delos Reyes, Maria G. Lam
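The closing sentence above suggests a Little's-law style sizing rule. A minimal sketch under that reading, where buffer capacity is roughly the mean arrival rate times the mean waiting time, scaled by mean packet size; the headroom factor and names are assumptions, not the claimed method.

```python
def buffer_capacity_bytes(mean_arrival_pps: float,
                          mean_wait_s: float,
                          mean_pkt_bytes: float,
                          headroom: float = 1.5) -> int:
    """Size a router buffer from measured traffic statistics.

    The expected number of resident packets is roughly arrival_rate *
    waiting_time (Little's law); multiplying by mean packet size gives
    bytes, and a headroom factor absorbs short bursts above the mean.
    """
    resident_packets = mean_arrival_pps * mean_wait_s
    return int(resident_packets * mean_pkt_bytes * headroom)

# e.g. 50k packets/s, 2 ms mean wait, 800-byte average packets
print(buffer_capacity_bytes(50_000, 0.002, 800))  # 120000 bytes
```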
  • Patent number: 8208380
    Abstract: In a memory management system, data packets received by input ports are stored in a circular buffer and queues associated with output ports. To preserve the packets stored in the buffer and prevent a head-drop event from occurring, a lossless system control component sends flow control commands when the difference between the oldest read pointer and the write pointer is less than a configurable transmission off threshold. The flow control commands pause the transmission of data packets to the input ports while enabling output ports to transmit stored data packets. When the difference between the oldest read pointer and the write pointer exceeds a configurable transmission on threshold, the lossless system controller ceases issuing flow control commands, and the input ports can resume receiving data packets.
    Type: Grant
    Filed: November 21, 2007
    Date of Patent: June 26, 2012
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Youval Nachum, Carmi Arad
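A simplified sketch of the pause/resume decision in the abstract above (patent 8208380), tracking only the gap between the write pointer and the oldest outstanding read pointer of a circular buffer; the buffer size, thresholds, and helper names are illustrative assumptions.

```python
class LosslessFlowControl:
    def __init__(self, size: int, xoff: int, xon: int):
        self.size, self.xoff, self.xon = size, xoff, xon
        self.paused = False

    def free_space(self, write_ptr: int, oldest_read_ptr: int) -> int:
        # Bytes still writable before the writer would overrun the
        # slowest reader in the circular buffer.
        return (oldest_read_ptr - write_ptr - 1) % self.size

    def update(self, write_ptr: int, oldest_read_ptr: int) -> bool:
        """Return True while XOFF (pause) should be asserted toward inputs."""
        free = self.free_space(write_ptr, oldest_read_ptr)
        if not self.paused and free < self.xoff:
            self.paused = True            # start sending pause commands
        elif self.paused and free > self.xon:
            self.paused = False           # stop; inputs may resume
        return self.paused

fc = LosslessFlowControl(size=4096, xoff=256, xon=1024)
print(fc.update(write_ptr=1000, oldest_read_ptr=1100))  # True: only 99 bytes free
print(fc.update(write_ptr=1000, oldest_read_ptr=3000))  # False: 1999 bytes free again
```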
  • Patent number: 8194690
    Abstract: Packets are processed in a system that comprises a plurality of interconnected processor cores. The system receives packets into one or more queues. The system associates at least some nodes in a hierarchy of nodes with at least one of the queues, and at least some of the nodes with a rate. The system maps a set of one or more nodes to a processor core based on a level in the hierarchy of the nodes in the set and based on at least one rate associated with a node not in the set. The packets are processed in one or more processor cores including the mapped processor core according to the hierarchy.
    Type: Grant
    Filed: May 24, 2007
    Date of Patent: June 5, 2012
    Assignee: Tilera Corporation
    Inventors: Kenneth M. Steele, Vijay Aggarwal
  • Patent number: 8179896
    Abstract: A network processor of an embodiment includes a packet classification engine, a processing pipeline, and a controller. The packet classification engine allows for classifying each of a plurality of packets according to packet type. The processing pipeline has a plurality of stages for processing each of the plurality of packets in a pipelined manner, where each stage includes one or more processors. The controller allows for providing the plurality of packets to the processing pipeline in an order that is based at least partially on: (i) packet types of the plurality of packets as classified by the packet classification engine and (ii) estimates of processing times for processing packets of the packet types at each stage of the plurality of stages of the processing pipeline. A method in a network processor allows for prefetching instructions into a cache for processing a packet based on a packet type of the packet.
    Type: Grant
    Filed: November 7, 2007
    Date of Patent: May 15, 2012
    Inventor: Justin Mark Sobaje
  • Patent number: 8175085
    Abstract: A scaling device or striper improves the lane efficiency of switch fabric. The striper controls or adjusts transfer modes and payload sizes of a large variety of devices operating with different protocols. The striper interfaces between network devices and the switch fabric, and the resulting switching system is configurable by a single controller. A source device sends a data packet to its corresponding striper for transmission across the switch fabric to a destination device. The corresponding striper parses the packet to determine its type and payload length, and divides the packet into numerous smaller segments when the payload length exceeds a predetermined length. The segments may be stored in the striper to adapt to the available bandwidth of the switch. The segments are sent across the switch fabric and reassembled at a destination striper. The packet as reassembled is forwarded to the destination device.
    Type: Grant
    Filed: January 14, 2009
    Date of Patent: May 8, 2012
    Assignee: Fusion-io, Inc.
    Inventors: Kiron Malwankar, Daniel Talayco
  • Publication number: 20120093170
    Abstract: A method, computer program product, and apparatus for managing data packets are presented. A data packet in the data packets is stored in a first portion of a memory in response to receiving the data packet at a device. The first portion of the memory is allocated to the device. A determination is made whether a size of the data packet is less than a threshold size. The data packet is copied from the first portion of the memory allocated to the device to a second portion of the memory in response to a determination that the size of the data packet stored in the memory is less than the threshold size.
    Type: Application
    Filed: October 14, 2010
    Publication date: April 19, 2012
    Applicant: International Business Machines Corporation
    Inventors: Edgar O. Cantu, David R. Marquardt, Jose G. Rivera, Thinh H. Tran
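A toy sketch of the size-based copy decision in the publication above (20120093170), with Python lists standing in for the device-allocated and shared memory portions; the threshold value is an illustrative assumption.

```python
THRESHOLD = 256          # illustrative cut-off in bytes
device_pool = []         # first portion: memory allocated to the device
shared_pool = []         # second portion: memory shared with consumers

def receive(packet: bytes):
    """Store the packet in the device's own region, then copy it to the
    shared region only when it is smaller than the threshold size."""
    device_pool.append(packet)
    if len(packet) < THRESHOLD:
        shared_pool.append(bytes(packet))   # explicit copy of small packets

receive(b"a" * 64)
receive(b"b" * 1500)
print(len(device_pool), len(shared_pool))   # 2 1
```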
  • Patent number: 8139502
    Abstract: A method of transforming an ordered list of nodes of a network into one of a plurality of elite ordered lists, the ordered list corresponding to a deloading sequence, the deloading sequence including a temporary capacity requirement, each of the elite ordered lists corresponding to an elite deloading sequence including an elite temporary capacity requirement by generating at least one intermediate ordered list corresponding to an intermediate deloading sequence including an intermediate temporary capacity requirement, selecting one of the intermediate ordered list and the ordered list based on a comparison of the intermediate temporary capacity requirement and the temporary capacity requirement and replacing one of the elite ordered lists with the one of the intermediate ordered list and the ordered list if a value corresponding to one of the intermediate temporary capacity requirement and the temporary capacity requirement is less than a lowest value of the elite temporary capacity requirements.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: March 20, 2012
    Assignee: AT&T Intellectual Property I, LP
    Inventors: Mauricio Guilherme de Carvalho Resende, Diogo Vieira Andrade
  • Patent number: 8135886
    Abstract: A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: March 13, 2012
    Assignee: Net Navigation Systems, LLC
    Inventors: Andrew Li, Michael Lau, Asad Khamisy
  • Patent number: 8131869
    Abstract: An audio-on-demand communication system provides real-time playback of audio data transferred via telephone lines or other communication links. One or more audio servers include memory banks which store compressed audio data. At the request of a user at a subscriber PC, an audio server transmits the compressed audio data over the communication link to the subscriber PC. The subscriber PC receives and decompresses the transmitted audio data in less than real-time using only the processing power of the CPU within the subscriber PC. According to one aspect of the present invention, high quality audio data compressed according to lossless compression techniques is transmitted together with normal quality audio data. According to another aspect of the present invention, metadata, or extra data, such as text, captions, still images, etc., is transmitted with audio data and is simultaneously displayed with corresponding audio data.
    Type: Grant
    Filed: February 10, 2009
    Date of Patent: March 6, 2012
    Assignee: RealNetworks, Inc.
    Inventors: Robert D. Glaser, Mark O'Brien, Thomas B. Boutell, Randy Glen Goldberg
  • Patent number: 8111720
    Abstract: In VoIP systems, there is a tradeoff between reducing the number of lost packets and end-to-end delay when dealing with jitter. Increasing the jitter buffer space on a mobile wireless terminal reduces the likelihood of lost packets but increases the end-to-end delay. Decreasing the jitter buffer space shortens the end-to-end delay, but there is a greater likelihood of retransmissions and dropped packets. An optimum solution can be arrived at if the jitter buffer space on the mobile wireless terminal can be matched to the scheduling delay. This is difficult to achieve in a conventional system because the scheduling delay introduced by the network is unknown to the mobile wireless terminal. Thus, constant adjustment is required. One way to overcome this problem is to apprise the mobile wireless terminal of the maximum scheduling delay.
    Type: Grant
    Filed: December 10, 2007
    Date of Patent: February 7, 2012
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventor: Per Synnergren
  • Patent number: 8081588
    Abstract: A mobile communication device has a wireless transceiver and one or more processors for communicating data in a wireless communication system. The one or more processors are operative to receive a plurality of data packets of varying payload size in a queue; associate one or more of the data packets from the queue into a group, such that a total size of the group is at or near a maximum transmissible unit (MTU) size of a data frame; cause the one or more data packets associated into the group to be formatted into the data frame for data transmission via the wireless transceiver; and repeat, for a plurality of data frames, the associating and formatting, for communicating the data via the wireless transceiver in the wireless communication system. By associating the data packets into groups having the MTU size, data throughput of the data transmission is increased.
    Type: Grant
    Filed: June 8, 2007
    Date of Patent: December 20, 2011
    Assignee: Research In Motion Limited
    Inventor: Mark Pecen
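A small sketch of the grouping step in the abstract above (patent 8081588), greedily associating queued packets into groups whose combined size stays at or near an assumed MTU; the packet sizes and MTU are illustrative.

```python
from collections import deque

def build_frames(queue: deque, mtu: int) -> list:
    """Associate consecutive queued packets into groups whose combined
    size stays at or near the MTU, one group per data frame (a single
    oversize packet still gets a frame of its own)."""
    frames, current, used = [], [], 0
    while queue:
        pkt = queue[0]
        if current and used + len(pkt) > mtu:
            frames.append(current)          # close this frame, start another
            current, used = [], 0
        current.append(queue.popleft())
        used += len(pkt)
    if current:
        frames.append(current)
    return frames

q = deque([b"x" * n for n in (400, 700, 300, 900, 100)])
print([sum(map(len, f)) for f in build_frames(q, mtu=1500)])  # [1400, 1000]
```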
  • Patent number: 8050685
    Abstract: An apparatus and method for UpLink (UL) radio resource allocation in a wideband wireless communication system are provided. In a method of operating a Relay Station (RS) for UpLink (UL) radio resource allocation in a wideband wireless communication system, the method includes relaying to a Base Station (BS) a resource request message of at least one or more Mobile Stations (MSs); receiving data from the at least one or more mobile stations; if the received data is non-real time traffic, queuing the data received from the mobile stations according to a traffic type; and requesting the base station to allocate necessary radio resources by checking a queue status. Accordingly, a delay can be reduced when the UL resource is allocated to an relay station for real time traffic.
    Type: Grant
    Filed: February 26, 2008
    Date of Patent: November 1, 2011
    Assignees: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Ki-Young Han, Jae-Woo So, Yong-Seok Kim, Sang-Wook Kwon, Ji-Hyun Park, Chi-Sung Bae, Dong-Ho Cho, Oh-Hyun Jo
  • Patent number: 8036239
    Abstract: A method for processing signals in a communication system is disclosed and may include pipelining processing of a received HSDPA bitstream within a single chip. The pipelining may include calculating a memory address for a current portion of a plurality of information bits in the received HSDPA bitstream, while storing on-chip, a portion of the plurality of information bits in the received HSDPA bitstream that is subsequent to the current portion. A portion of the plurality of information bits in the received HSDPA bitstream that is previous to the current portion may be decoded during the calculating and storing. The calculation of the memory address for the current portion of the plurality of information bits may be achieved without the use of a buffer. Processing of the plurality of information bits in the received HSDPA bitstream may be partitioned into a functional data processing path and functional address processing path.
    Type: Grant
    Filed: February 22, 2010
    Date of Patent: October 11, 2011
    Inventors: Li Fung Chang, Mark Hahm, Simon Baker
  • Patent number: 8027346
    Abstract: A method and system schedule data for dequeuing in a communication network. The communication network includes an eligible scheduling node, a scheduling context structure, and an existence of data structure. In response to determining that an eligible scheduling node does not contain at least one child identifier in the scheduling context structure, an eligible child is selected for dequeue from the existence of data structure. At least one eligible child from the existence of data structure is absorbed into the scheduling context structure. The at least one eligible child includes the child selected for dequeue. Absorbing a child includes removing the child identifier from the existence of data queue and adding the child identifier to the scheduling context structure.
    Type: Grant
    Filed: May 29, 2008
    Date of Patent: September 27, 2011
    Assignee: Avaya Inc.
    Inventors: Bradley D. Venables, David G. Stuart
  • Patent number: 8018958
    Abstract: Systems and methods consistent with the present invention provide a mechanism that can efficiently manage multiple queues and maintain fairness among ports while not placing additional performance demands on the memory used to store the queue data structures. Within a port, high priority traffic is dropped only if it is consuming more than its fair share of the bandwidth allocated to that port. Queue arbitration is simple and of low performance cost because it arbitrates only across the queues of each port, rather than across all the queues in parallel. Accordingly, fair arbitration is achieved with relatively little hardware cost.
    Type: Grant
    Filed: June 23, 2009
    Date of Patent: September 13, 2011
    Assignee: Juniper Networks, Inc.
    Inventors: John Delmer Johnson, Abhijit Ghosh
  • Patent number: 8005971
    Abstract: An apparatus for communicating with a network comprises a queue and logic. The queue has at least one entry stored therein. The at least one entry respectively points to at least one data packet. The logic is configured to read the at least one entry from the queue and to retrieve the at least one data packet based on the at least one entry. The logic is configured to transition to a sleep state based on a determination that a new entry for reading, by the logic, from the queue is unavailable for a specified amount of time.
    Type: Grant
    Filed: February 8, 2003
    Date of Patent: August 23, 2011
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jeffrey Joel Walls, Michael Trent Hamilton
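A minimal sketch of the idle-sleep behaviour in the abstract above (patent 8005971), with Python threading and queue primitives standing in for the hardware logic; the timeout value and names are illustrative assumptions.

```python
import queue
import threading

entries = queue.Queue()          # each entry points at a received packet
packets = {0: b"hello", 1: b"world"}
IDLE_TIMEOUT = 0.5               # seconds with no new entry before sleeping

def reader():
    while True:
        try:
            idx = entries.get(timeout=IDLE_TIMEOUT)
        except queue.Empty:
            print("no new entries -> transition to sleep state")
            return
        print("retrieved", packets[idx])   # fetch the packet the entry points to

entries.put(0); entries.put(1)
t = threading.Thread(target=reader); t.start(); t.join()
```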
  • Patent number: 7983287
    Abstract: Roughly described, a packet switching fabric contains a separate queue scheduler for each combination of an input module and a fabric output port. The schedulers may also be specific to a single class of service. Each queue scheduler schedules its packets without regard to state of other input queues and without regard to packets destined for other output ports. In an aspect, the fabric manages per-flow bandwidth utilization of output port bandwidth capacity by monitoring the same and asserting backpressure toward the queue scheduler for any thread that is exceeding its bandwidth allocation. In another aspect, a switching fabric uses leaky buckets to apply backpressure in response to overutilization of downstream port capacity by particular subflows. In another aspect, a switching fabric includes a cascaded backpressure scheme.
    Type: Grant
    Filed: May 14, 2008
    Date of Patent: July 19, 2011
    Assignee: Agere Systems Inc.
    Inventors: John T. Musacchio, Jean Walrand, Roy T. Myers, Jr., Shyam P. Parekh, Jeonghoon Mo, Gaurav Agarwal
  • Patent number: 7961649
    Abstract: A circulating switch comprises switch modules of moderate capacities interconnected by a passive rotator. Data is sent from one switch module to another switch module either directly, traversing the rotator once, or indirectly through at least one intermediate switch module, where the rotator is traversed twice. A higher capacity extended circulating switch is constructed from higher-capacity switch modules, implemented as common memory switches and having multiple ports, interconnected through a multiplicity of rotators preferably arranged in complementary groups of rotators of opposite rotation directions. A polyphase circulating switch having a low switching delay is derived from a multi-rotator circulating switch by providing programmable rotators having adjustable relative rotator-cycle phases. A low delay high-capacity switch may also be constructed from prior-art medium-capacity rotator space switches with mutually phase-shifted rotation cycles.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: June 14, 2011
    Assignee: Nortel Networks Limited
    Inventor: Maged E. Beshai
  • Patent number: 7961721
    Abstract: A router for a network is arranged for guiding data traffic from one of a first plurality N_i of inputs (I) to one or more of a second plurality N_o of outputs (O). The inputs each have a third plurality m of input queues for buffering data. The third plurality m is greater than 1, but less than the second plurality N_o. The router includes a first selection facility for writing data received at an input to a selected input queue of the input, and a second selection facility for providing data from an input queue to a selected output. Pairs of packets having different destinations O_j and O_k are arranged in the same queue for a total number of N_{j,k} inputs, characterized in that N_{j,k} < N for each j,k.
    Type: Grant
    Filed: February 21, 2006
    Date of Patent: June 14, 2011
    Assignee: NXP B.V.
    Inventors: Theodorus Jacobus Denteneer, Ronald Rietman, Santiago Gonzalez Pestana, Nick Boot, Ivo Jean-Baptiste Adan
  • Patent number: 7949002
    Abstract: A First-In-First-Out (FIFO) block to buffer a packet having a size is presented. The FIFO block includes a receiver to receive a data frame including the packet and overhead information, and to extract the packet from the data frame. A buffer has a plurality of memory locations to store the packet in a FIFO configuration. A buffer manager, in response to detecting a buffer low packet condition, stalls reads of the packet from the buffer.
    Type: Grant
    Filed: February 15, 2008
    Date of Patent: May 24, 2011
    Assignee: Marvell International Ltd.
    Inventors: William Lo, Samuel Er-Shen Tang, Sabu Ghazali
  • Patent number: 7944936
    Abstract: An apparatus and method for connecting a plurality of computing devices, e.g. web servers, database servers, etc., to a plurality of storage devices, such as disks, disk arrays, tapes, etc., by using a stream-oriented (circuit oriented) switch that has high throughput, but that requires non-negligible time for reconfiguration is disclosed. An example of such stream-oriented switch is an optical switch. The system decodes the requests from the computing devices and uses this information to create circuits, e.g. optical paths in embodiments where the stream-oriented switch is an optical switch, through the stream-oriented switch. The system uses these circuits to route traffic between the computing devices and the storage devices. Buffering of data and control in the device memory is used to improve overall throughput and reduce the time spent on reconfigurations.
    Type: Grant
    Filed: June 23, 2006
    Date of Patent: May 17, 2011
    Assignee: NetApp, Inc.
    Inventors: Dan Avida, Serge Plotkin
  • Patent number: 7944825
    Abstract: In order to provide an ATM cell/packet switch which can easily maintain the band for the normality confirmation packet of the user data transfer path without influencing the user band at a state of in-band, and a communication control method using the switch, at least provided in the switch are: an SDRAM for storing the user data; a normality confirmation packet generator; a timing generator for generating the timing of a refresh cycle of the SDRAM; a selector for transferring the normality confirmation packet at the time of the refresh; and a packet reception unit for extracting the packet identifying information from the received packet data, and comparing the normality confirmation packet directly received from the packet generator to the normality confirmation packet received via the switch unit thereby to confirm the normality when the packet data is the normality confirmation packet.
    Type: Grant
    Filed: May 3, 2007
    Date of Patent: May 17, 2011
    Assignee: NEC Corporation
    Inventor: Yuichi Tazaki
  • Patent number: 7921241
    Abstract: A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip.
    Type: Grant
    Filed: June 1, 2009
    Date of Patent: April 5, 2011
    Assignee: Applied Micro Circuits Corporation
    Inventors: Andrew Li, Michael Lau, Asad Khamisy
  • Patent number: 7916742
    Abstract: An embodiment of the invention provides for dynamically calibrating a jitter buffer based on a percentage used of the jitter buffer. Such a solution provides for efficiently adapting the size of a jitter buffer without the need for complex and processor intensive operations. By adjusting the size of a jitter buffer in a simple and dynamic fashion, undesirable delay can be removed from a service session, and gaps prevented. Similarly, delay can be easily introduced into a service session when necessary. In an embodiment of the invention, a communication system comprises a jitter buffer and a processing system. The jitter buffer is configured to buffer traffic. The processing system is configured to determine the percentage used of the jitter buffer by the buffered traffic, and calibrate the size of the jitter buffer in response to the percentage used of the jitter buffer.
    Type: Grant
    Filed: May 11, 2005
    Date of Patent: March 29, 2011
    Assignee: Sprint Communications Company L.P.
    Inventor: Michael K. Bugenhagen
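A minimal sketch of the percentage-based calibration in the abstract above (patent 7916742); the watermarks, resize step, and size bounds are illustrative assumptions, not values from the patent.

```python
def calibrate(buffer_ms: int, used_ms: int,
              low_pct=25, high_pct=75, step_ms=10,
              min_ms=20, max_ms=200) -> int:
    """Grow the jitter buffer when it runs nearly full (risk of gaps),
    shrink it when mostly empty (unnecessary delay)."""
    pct_used = 100 * used_ms / buffer_ms
    if pct_used > high_pct:
        buffer_ms = min(max_ms, buffer_ms + step_ms)
    elif pct_used < low_pct:
        buffer_ms = max(min_ms, buffer_ms - step_ms)
    return buffer_ms

size = 60
for used in (50, 55, 10, 8):          # ms of audio currently buffered
    size = calibrate(size, used)
    print(size)                       # 70, 80, 70, 60
```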
  • Patent number: 7917656
    Abstract: A messaging service is described that incorporates messages into cached link lists. The messages are not yet acknowledged as having been received by one or more consumers to whom the messages were sent. A separate link list exists for each of a plurality of different message priority levels. Messages within a same link list are ordered in their link list in the same order in which they were received by the messaging service. At least one of the link lists contains an element that represents one or more messages that are persisted but are not cached in any of the cached link lists.
    Type: Grant
    Filed: December 29, 2005
    Date of Patent: March 29, 2011
    Assignee: SAP AG
    Inventors: Radoslav I. Nikolov, Desislav V. Bantchovski, Stoyan M. Vellev
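A compact sketch of the per-priority message lists in the abstract above (patent 7917656), using deques in place of cached link lists and reducing acknowledgement handling to a simple removal; persistence of uncached messages is omitted, and all names are illustrative.

```python
from collections import deque

class PendingMessages:
    """One FIFO list per priority level for messages sent to consumers
    but not yet acknowledged; within a level, receipt order is preserved."""
    def __init__(self, levels: int = 3):
        self.lists = [deque() for _ in range(levels)]

    def received(self, priority: int, msg_id: str):
        self.lists[priority].append(msg_id)

    def acknowledged(self, priority: int, msg_id: str):
        self.lists[priority].remove(msg_id)   # drop once the consumer acks

pending = PendingMessages()
pending.received(0, "m1"); pending.received(2, "m2"); pending.received(0, "m3")
pending.acknowledged(0, "m1")
print([list(l) for l in pending.lists])   # [['m3'], [], ['m2']]
```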
  • Publication number: 20110069717
    Abstract: A data transfer device includes a plurality of input queues, a plurality of arbitration control units provided for the respective input queues, and an input queue selecting unit that selects any one of the input queues based on a priority set for each input queue, and outputs data from the selected input queue. Each arbitration control unit includes a register that stores therein a predetermined upper limit, a counter that counts the amount of data output from a corresponding input queue, and a control circuit that, when a value of the counter becomes equal to or greater than the upper limit stored in the register, causes the input queue selecting unit to update the priority and resets the value of the counter.
    Type: Application
    Filed: November 22, 2010
    Publication date: March 24, 2011
    Applicant: Fujitsu Limited
    Inventors: Hidekazu Osano, Takayuki Kinoshita, Yoshikazu Iwami, Makoto Hataida
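A minimal sketch of the arbitration loop in the publication above (20110069717): each input queue has a byte counter and an upper limit, and once the counter reaches the limit the queue's priority is updated and the counter reset. Demoting the queue to the back of the priority order is an illustrative assumption about how the priority update works; all names and limits are likewise illustrative.

```python
from collections import deque

class Arbiter:
    def __init__(self, limits):
        self.queues = [deque() for _ in limits]
        self.limits = list(limits)              # per-queue upper limit (bytes)
        self.counts = [0] * len(limits)         # bytes output since last reset
        self.order = list(range(len(limits)))   # current priority order

    def push(self, q, data: bytes):
        self.queues[q].append(data)

    def pop(self):
        """Serve the highest-priority non-empty queue; when its counter
        reaches the limit, move it to the back of the order and reset."""
        for q in self.order:
            if self.queues[q]:
                data = self.queues[q].popleft()
                self.counts[q] += len(data)
                if self.counts[q] >= self.limits[q]:
                    self.order.remove(q); self.order.append(q)
                    self.counts[q] = 0
                return q, data
        return None

arb = Arbiter(limits=[128, 128])
for _ in range(3):
    arb.push(0, b"a" * 64); arb.push(1, b"b" * 64)
print([arb.pop()[0] for _ in range(6)])   # [0, 0, 1, 1, 0, 1]
```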
  • Publication number: 20110051742
    Abstract: Apparatus for flexible sharing of bandwidth in switches with input buffering by dividing time into a plurality of frames of time slots, wherein each frame has a specified integer value of time slots. The apparatus includes modules where inputs sequentially select available outputs to which the inputs send packets in specified future time slots. The selection of outputs by the inputs is done using a pipeline technique and a schedule is calculated within multiple time slots.
    Type: Application
    Filed: September 8, 2010
    Publication date: March 3, 2011
    Inventor: Aleksandra Smiljanic
  • Patent number: 7899069
    Abstract: A method and system for transmitting packets in a packet switching network. Packets received by a packet processor may be prioritized based on the urgency to process them. Packets that are urgent to be processed may be referred to as real-time packets. Packets that are not urgent to be processed may be referred to as non-real-time packets. Real-time packets have a higher priority to be processed than non-real-time packets. A real-time packet may either be discarded or transmitted into a real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time queue congestion conditions. A non-real-time packet may either be discarded or transmitted into a non-real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time and non-real-time queue congestion conditions.
    Type: Grant
    Filed: May 3, 2008
    Date of Patent: March 1, 2011
    Assignee: International Business Machines Corporation
    Inventors: Brahmanand Kumar Gorti, Marco Heddes, Clark Debs Jeffries, Andreas Kind, Michael Steven Siegel
  • Patent number: 7873061
    Abstract: A technique for improved throughput at an access point (AP) involves queuing frames for a particular station when the frames are received for transmission by the AP. A system constructed according to the technique may include an aggregation and queuing layer. Station queues may be processed by the aggregation and queuing layer before being given to radio hardware for transmission. In an illustrative embodiment, when frames are received by the aggregation and queuing layer, each frame will be assigned a target delivery time (TDT) and an acceptable delivery time (ADT). The TDT is the "ideal" time to transmit a frame, based on its jitter and throughput requirements. Frames are mapped on to a time axis for transmission by TDT. In an illustrative embodiment, each frame is mapped by priority, so that there are separate maps for voice, video, best effort, and background frames. There will be gaps between frames for transmission that can be used for aggregation.
    Type: Grant
    Filed: December 28, 2006
    Date of Patent: January 18, 2011
    Assignee: Trapeze Networks, Inc.
    Inventors: Matthew Stuart Gast, Richard Thomas Bennett
  • Patent number: 7865634
    Abstract: A method and apparatus to perform buffer management for media processing are described.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: January 4, 2011
    Assignee: Intel Corporation
    Inventor: Ling Chen
  • Patent number: 7836195
    Abstract: In one embodiment, the present invention includes a method for receiving a first packet associated with a first network flow in a first descriptor queue associated with a first hardware thread, receiving a marker in the first descriptor queue to indicate migration of the first network flow from the first hardware thread to a second hardware thread, and processing a second packet of the first network flow following the first packet in order in the second hardware thread.
    Type: Grant
    Filed: February 27, 2008
    Date of Patent: November 16, 2010
    Assignee: Intel Corporation
    Inventors: Bryan Veal, Annie Foong
  • Patent number: 7830793
    Abstract: The present invention provides methods and devices for implementing a Low Latency Ethernet (“LLE”) solution, also referred to herein as a Data Center Ethernet (“DCE”) solution, which simplifies the connectivity of data centers and provides a high bandwidth, low latency network for carrying Ethernet and storage traffic. Some aspects of the invention involve transforming FC frames into a format suitable for transport on an Ethernet. Some preferred implementations of the invention implement multiple virtual lanes (“VLs”) in a single physical connection of a data center or similar network. Some VLs are “drop” VLs, with Ethernet-like behavior, and others are “no-drop” lanes with FC-like behavior. Some preferred implementations of the invention provide guaranteed bandwidth based on credits and VL. Active buffer management allows for both high reliability and low latency while using small frame buffers. Preferably, the rules for active buffer management are different for drop and no drop VLs.
    Type: Grant
    Filed: March 30, 2005
    Date of Patent: November 9, 2010
    Assignee: Cisco Technology, Inc.
    Inventors: Silvano Gai, Thomas Edsall, Davide Bergamasco, Dinesh Dutt, Flavio Bonomi
  • Publication number: 20100254399
    Abstract: A method for controlling uplink IP packet filtering in a mobile terminal in a 3GPP Evolved Packet System (EPS) is provided, including an information receiving operation of receiving IP address information allocated to user equipment, and filtering information required for delivering an uplink IP packet received from the user equipment; and a filtering operation for determining which packet data network and a bearer the IP packet is delivered to, based on the IP address information and the filtering information. In a 3GPP evolved packet system supporting a default bearer function, a packet data network to which an uplink IP packet is delivered and a bearer identifier can be efficiently determined when the user equipment simultaneously accesses one or more packet data networks and is allocated several IP addresses, resulting in effective uplink packet filtering.
    Type: Application
    Filed: May 22, 2008
    Publication date: October 7, 2010
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae Wook Shin, Kwang-Ryul Jung, Ae-Soon Park
  • Patent number: 7804805
    Abstract: A method and apparatus for scheduling the data packets transmitted to a plurality of mobile terminals supporting multiple quality of service (QoS) grades in a multichannel wireless communication system includes a storage device for storing queues and data packets of the mobile stations, the queue and data packets of each of the mobile stations being arranged in an order of the quality of service grades; and a scheduler for allocating resources of multiple channels to the mobile stations based on different scheduling metrics separately applied to the multiple channels according to the quality of service grades, each of the scheduling metrics applied to a particular one of channels being used to select one of the mobile stations whose data packets are transmitted through the particular channel; wherein entire data packets of the mobile stations are transmitted through the multiple channels when the allocation of the channel resources has been completed sequentially for each of the multiple channels.
    Type: Grant
    Filed: June 27, 2006
    Date of Patent: September 28, 2010
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Won-Hyoung Park, Sung-Hyun Cho, Dae-Young Park
  • Patent number: 7801163
    Abstract: A method for allocating space among a plurality of queues in a buffer includes sorting all the queues of the buffer according to size, thereby to establish a sorted order of the queues. At least one group of the queues is selected, consisting of a given number of the queues in accordance with the sorted order. A portion of the space in the buffer is allocated to the group, responsive to the number of the queues in the group. A data packet is accepted into one of the queues in the group responsive to whether the data packet will cause the space occupied in the buffer by the queues in the group to exceed the allocated portion of the space.
    Type: Grant
    Filed: April 13, 2006
    Date of Patent: September 21, 2010
    Inventors: Yishay Mansour, Alexander Kesselman
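A rough sketch of the admission test in the abstract above (patent 7801163): queues are sorted by size, the largest ones form a group, the group is allocated a portion of the buffer based on its queue count, and an arriving packet is accepted only if the group stays within that portion. The per-queue share fraction and group size are illustrative assumptions.

```python
def accept(queue_sizes: dict, buffer_size: int, dst: str,
           pkt_len: int, group_count: int = 2,
           share_per_queue: float = 0.4) -> bool:
    """Accept a packet for queue `dst` only if the group of the largest
    `group_count` queues (with `dst` counted at its new size) would still
    fit inside the portion of the buffer allocated to that group."""
    sizes = dict(queue_sizes)
    sizes[dst] = sizes.get(dst, 0) + pkt_len          # size after admission
    group = sorted(sizes.values(), reverse=True)[:group_count]
    allocation = buffer_size * share_per_queue * group_count
    return sum(group) <= allocation

queues = {"q1": 300, "q2": 500, "q3": 100}
print(accept(queues, buffer_size=1000, dst="q2", pkt_len=200))  # False: big queues exceed share
print(accept(queues, buffer_size=1000, dst="q3", pkt_len=200))  # True: group stays within share
```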
  • Patent number: 7756133
    Abstract: The invention relates to a method for processing a sequence of data packets in a receiver apparatus, in particular a sequence of audio and/or video data packets, as well as to a receiver apparatus.
    Type: Grant
    Filed: March 30, 2005
    Date of Patent: July 13, 2010
    Assignee: Thomson Licensing
    Inventor: Frank Gläser
  • Publication number: 20100150165
    Abstract: A method for processing signals in a communication system is disclosed and may include pipelining processing of a received HSDPA bitstream within a single chip. The pipelining may include calculating a memory address for a current portion of a plurality of information bits in the received HSDPA bitstream, while storing on-chip, a portion of the plurality of information bits in the received HSDPA bitstream that is subsequent to the current portion. A portion of the plurality of information bits in the received HSDPA bitstream that is previous to the current portion may be decoded during the calculating and storing. The calculation of the memory address for the current portion of the plurality of information bits may be achieved without the use of a buffer. Processing of the plurality of information bits in the received HSDPA bitstream may be partitioned into a functional data processing path and functional address processing path.
    Type: Application
    Filed: February 22, 2010
    Publication date: June 17, 2010
    Inventors: Li Fung Chang, Mark Hahm, Simon Baker
  • Patent number: RE43110
    Abstract: A Pipelined-based Maximal-sized Matching (PMM) scheduling approach for input-buffered switches relaxes the timing constraint for arbitration with a maximal matching scheme. In the PMM approach, arbitration may operate in a pipelined manner. Each subscheduler is allowed to take more than one time slot for its matching. Every time slot, one of them provides the matching result. The subscheduler can adopt a pre-existing efficient maximal matching algorithm such as iSLIP and DRRM. PMM maximizes the efficiency of the adopted arbitration scheme by allowing sufficient time for a number of iterations. PMM preserves 100% throughput under uniform traffic and fairness for best-effort traffic.
    Type: Grant
    Filed: February 28, 2008
    Date of Patent: January 17, 2012
    Assignee: Polytechnic University
    Inventors: Eiji Oki, Roberto Rojas-Cessa, Hung-Hsiang Jonathan Chao