Queue Content Modification Patents (Class 710/54)
  • Patent number: 7484017
    Abstract: A two-dimensional command block queue includes a plurality of command blocks in a first linked list. One of the command blocks in a string is included in the first linked list. The string is delimited by only a tail pointer stored in a tail pointer list. Following dequeuing the string for processing, a pointer to the one command block of the string that was in the common queue is included in a string head pointer list. The tail pointer to the string is not changed in the tail pointer list following dequeuing of the string. This allows any new SCBs to be appended to the end of the string, while the string is being processed. This allows streaming of new SCBs to an I/O device that had previously been selected and is still connected to the host adapter.
    Type: Grant
    Filed: June 28, 2005
    Date of Patent: January 27, 2009
    Assignee: Adaptec, Inc.
    Inventor: B. Arlen Young
  • Patent number: 7480754
    Abstract: The queue execution mode is selected based on the unique tag that is assigned to the command. In one method embodiment a tag is assigned for each of several disc access commands sent by the host. Two or more queues are created, each having a queue execution mode. Which of the queues is assigned to the command depends on the command's tag. One device embodiment comprises a data storage disc, a memory, and a controller. The memory is configured to hold several pending commands for accessing the disc(s), each of the commands having a unique tag. The controller is configured to execute each queued command according to a mode that is determined based on the command's tag.
    Type: Grant
    Filed: June 27, 2003
    Date of Patent: January 20, 2009
    Assignee: Seagate Technology, LLC
    Inventors: Anthony L. Priborsky, Robert B. Wood
  • Publication number: 20090019196
    Abstract: The present disclosure provides a method for providing Quality of Service (QoS) processing of a plurality of data packets stored in a first memory. The method may include determining a queue of a plurality of queues causing an interrupt using contents of an interrupt status register, the queue comprising address of at least one data packet of the plurality of data packets. The method may further include performing a logical operation between the contents of the interrupt status register and an interrupt mask of a plurality of interrupt masks, the plurality of interrupt masks stored in a second memory. The method may also include processing the plurality of data packets based on the logical operation and incrementing an interrupt mask address pointer stored in a third memory, thereby pointing to another interrupt mask of the plurality of interrupt masks. Of course, many alternatives, variations and modifications are possible without departing from this embodiment.
    Type: Application
    Filed: July 9, 2007
    Publication date: January 15, 2009
    Applicant: INTEL CORPORATION
    Inventors: Yen Hsiang Chew, Shanggar Periaman, Kooi Chi Ooi, Bok Eng Cheah
  • Patent number: 7475170
    Abstract: The present invention is a data transfer device, which comprises an input/output reception buffer, an input/output transmission buffer, a write data buffer, a read data buffer, a control information table, a write data storing process section, a write data transmission section, a read data buffer storing process section, an input/output transmission buffer storing process section and a control section that executes an access control for controlling the access to the memory by the write data transmission section and the read data buffer storing process section based on a control information table; thereby, a configuration optimum for both protocols of the memory bus and the input/output bus is obtained and the out-of-order execution is also achievable.
    Type: Grant
    Filed: July 25, 2005
    Date of Patent: January 6, 2009
    Assignee: Fujitsu Limited
    Inventors: Junichi Inagaki, Masao Koyabu, Jun Tsuiki, Masahiro Kuramoto
  • Publication number: 20090006672
    Abstract: An apparatus and method for tracking coherence event signals transmitted in a multiprocessor system. The apparatus comprises a coherence logic unit, each unit having a plurality of queue structures with each queue structure associated with a respective sender of event signals transmitted in the system. A timing circuit associated with a queue structure controls enqueuing and dequeuing of received coherence event signals, and, a counter tracks a number of coherence event signals remaining enqueued in the queue structure and dequeued since receipt of a timestamp signal. A counter mechanism generates an output signal indicating that all of the coherence event signals present in the queue structure at the time of receipt of the timestamp signal have been dequeued. In one embodiment, the timestamp signal is asserted at the start of a memory synchronization operation and, the output signal indicates that all coherence events present when the timestamp signal was asserted have completed.
    Type: Application
    Filed: June 26, 2007
    Publication date: January 1, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthias A. Blumrich, Dong Chen, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Martin Ohmacht, Valentina Salapura, Pavlos Vranas
  • Patent number: 7469309
    Abstract: Methods and apparatus for peer-to-peer data transfers in a computing environment provide configurable control over the number of outstanding read requests by one peer device to another. A requesting peer device includes a control register that stores a high-water mark value associated with requests to a target peer device. Each time a read request to the target peer device is generated, the number of such requests already outstanding is compared to the high-water mark. The request is blocked if the number of outstanding requests exceeds the high-water mark and remains blocked until such time as the number of outstanding requests no longer exceeds the high-water mark. Different high-water marks can be associated with different combinations of requesting and target devices.
    Type: Grant
    Filed: December 12, 2005
    Date of Patent: December 23, 2008
    Assignee: Nvidia Corporation
    Inventors: Samuel Hammond Duncan, Wei-Je Huang, Radha Kanekal
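
A note on the mechanism in the entry above: it reduces to a per-target counter of outstanding read requests compared against a programmable threshold. The sketch below illustrates that bookkeeping in C; the type and function names (peer_link_t, try_issue_read, complete_read) are illustrative assumptions, not the patented hardware design.

    #include <stdbool.h>
    #include <stdint.h>

    /* Per requesting/target device pair: a programmable high-water mark
     * (the "control register") and a count of read requests that have
     * been issued but whose completions have not yet returned. */
    typedef struct {
        uint32_t high_water_mark;
        uint32_t outstanding_reads;
    } peer_link_t;

    /* Block the request (return false) when the outstanding count exceeds
     * the high-water mark; otherwise account for the new request and allow it. */
    static bool try_issue_read(peer_link_t *link)
    {
        if (link->outstanding_reads > link->high_water_mark)
            return false;               /* caller retries later */
        link->outstanding_reads++;
        return true;
    }

    /* Called when a read completion arrives from the target peer device. */
    static void complete_read(peer_link_t *link)
    {
        if (link->outstanding_reads > 0)
            link->outstanding_reads--;
    }
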
  • Patent number: 7467242
    Abstract: Method and system for a dynamic FIFO flow control circuit. The dynamic FIFO flow control circuit detects one or more obsolete entries in a FIFO memory, retrieves the address of the next valid read pointer, and reads from the retrieved address during the next read operation.
    Type: Grant
    Filed: May 13, 2003
    Date of Patent: December 16, 2008
    Assignee: Via Technologies, Inc.
    Inventor: Hsilin Huang
  • Patent number: 7464201
    Abstract: A memory controller for a wireless communication system comprises a packet buffer write system and a packet buffer read system. The packet buffer write system places packets including packet header and packet data into a packet buffer. The packet buffer read system removes packets including a packet header and packet data from a packet buffer. The packet buffer is arranged into a plurality of packet buffer memory slots, each slot comprising a descriptor status array location including an availability bit set to “used” or “free”, and a packet buffer memory location comprising a descriptor memory slot and a data segment memory slot. The descriptor memory slot includes header information for each packet, and the data segment memory slot includes packet data. The memory controller operates on one or more queues of data, and data is placed into a particular queue in packet memory determined by priority information derived from incoming packet header or packet data.
    Type: Grant
    Filed: May 16, 2007
    Date of Patent: December 9, 2008
    Assignee: Redpine Signals, Inc.
    Inventors: Narasimhan Venkatesh, Satya Rao
  • Patent number: 7461180
    Abstract: Techniques for synchronizing use of buffer descriptors for data, such as packets transmitted over a network, include receiving private index data that indicates a particular buffer descriptor owned by a DMA controller, for moving data between a data port and a corresponding memory buffer. A write command is placed on a memory exchange queue to change the owner to a different processor and the private index data is incremented. A public index is determined, which indicates a different buffer descriptor in which the owner is most recently changed to the processor and is known to be visible to the processor. In response to receiving a request from the processor for the most recent buffer descriptor changed to processor ownership, the public index data is sent to the processor. Based on the public index data, the processor exchanges data with buffer descriptors guaranteed to be owned by the processor.
    Type: Grant
    Filed: May 8, 2006
    Date of Patent: December 2, 2008
    Assignee: Cisco Technology, Inc.
    Inventors: William Lee, Trevor Garner, Martin Hughes, Dennis Briddell
  • Patent number: 7461284
    Abstract: Disclosed is a method for minimizing the buffer size of an elasticity FIFO queue when synchronizing data between two clock domains. Data communication is typically sent by a transmitter device to a receiver device. The transmitted data signal includes an embedded clock signal and null data characters, as specified by the data communication signal protocol. A null character indicates an empty data frame and is included as part of most standard communication protocols. An embodiment skips one or more null characters from the elasticity FIFO queue during a single clock cycle when it is detected that the write pointer is catching up to the read pointer. By skipping multiple null characters during a single write cycle, the read pointer is moved ahead by one or more queue locations and the write pointer is ensured not to catch up to the read pointer for a wider variation in frequencies between a transmitter and receiver than is normally possible.
    Type: Grant
    Filed: June 20, 2005
    Date of Patent: December 2, 2008
    Assignee: LSI Corporation
    Inventors: Timothy D. Thompson, Christopher D. Paulson
  • Patent number: 7457893
    Abstract: A method is disclosed for dynamically selecting software buffers for aggregation in order to optimize system performance. Data to be transferred to a device is received. The data is stored in a chain of software buffers. Current characteristics of the system are determined. Software buffers to be combined are then dynamically selected. This selection is made according to the characteristics of the system in order to maximize performance of the system.
    Type: Grant
    Filed: March 11, 2004
    Date of Patent: November 25, 2008
    Assignee: International Business Machines Corporation
    Inventors: James R. Gallagher, Ron Encarnacion Gonzalez, Binh K. Hua, Sivarama K. Kodukula
  • Patent number: 7457895
    Abstract: An apparatus and method for dynamically allocating memory between inbound and outbound paths of a networking protocol handler so as to optimize the ratio of a given amount of memory between the inbound and outbound buffers is presented. Dedicated but sharable buffer memory is provided for both the inbound and outbound processors of a computer network. Buffer memory is managed so as to dynamically alter what portion of memory is used to receive and store incoming data packets or to transmit outgoing data packets. Use of the present invention reduces throttling of data rate transmissions and other memory access bottlenecks associated with conventional fixed-memory network systems.
    Type: Grant
    Filed: February 28, 2007
    Date of Patent: November 25, 2008
    Assignee: International Business Machines Corporation
    Inventors: Mark R. Bilak, Robert M. Bunce, Steven C. Parker, Brian J. Schuh
  • Patent number: 7447812
    Abstract: Multi-queue first-in first-out (FIFO) memory devices include multi-port register files that provide write count and read count flow-through when the write and read queues are equivalent. According to some of these embodiments, a multi-queue FIFO memory device includes a write flag counter register file that is configured to support flow-through of write counter updates to at least one read port of the write flag counter register file. This flow-through occurs when an active write queue and an active read queue within the FIFO memory device are the same. A read flag counter register file is also provided, which supports flow-through of read counter updates to at least one read port of the read flag counter register file when the active write queue and the active read queue are the same.
    Type: Grant
    Filed: March 15, 2005
    Date of Patent: November 4, 2008
    Assignee: Integrated Device Technology, Inc.
    Inventors: Jason Zhi-Cheng Mo, Prashant Shamarao, Jianghui Su
  • Patent number: 7447875
    Abstract: A method and system for managing global queues is provided. In one example, a method for implementing a global queue is provided. The queue has a head pointer, a tail pointer, and zero or more elements. The method comprises one or more functions for managing the queue, such as an “add to end” function, an “add to front” function, an “empty queue” function, a “remove from front” function, a “remove specific” function and/or a “lock queue” function. In some examples, the method enables an element to be added to the queue even when the queue is in a locked state.
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: November 4, 2008
    Assignee: Novell, Inc.
    Inventor: Dana Henriksen
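
The functions named in the abstract above ("add to end", "add to front", "remove from front") map directly onto a singly linked list with head and tail pointers. Below is a minimal single-threaded sketch of those three operations; the locking behavior and the ability to add while the queue is locked, which the patent also covers, are deliberately omitted, and all identifiers are illustrative.

    #include <stddef.h>

    typedef struct qelem {
        struct qelem *next;
        void         *payload;
    } qelem_t;

    typedef struct {
        qelem_t *head;                  /* oldest element, removed first */
        qelem_t *tail;                  /* newest element, appended here */
    } gqueue_t;

    /* "Add to end": append an element at the tail. */
    static void q_add_to_end(gqueue_t *q, qelem_t *e)
    {
        e->next = NULL;
        if (q->tail)
            q->tail->next = e;
        else
            q->head = e;                /* queue was empty */
        q->tail = e;
    }

    /* "Add to front": push an element at the head. */
    static void q_add_to_front(gqueue_t *q, qelem_t *e)
    {
        e->next = q->head;
        q->head = e;
        if (!q->tail)
            q->tail = e;
    }

    /* "Remove from front": pop the oldest element, or NULL if empty. */
    static qelem_t *q_remove_from_front(gqueue_t *q)
    {
        qelem_t *e = q->head;
        if (e) {
            q->head = e->next;
            if (!q->head)
                q->tail = NULL;
            e->next = NULL;
        }
        return e;
    }
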
  • Patent number: 7447805
    Abstract: A buffer chip having a first data interface for receiving a data item which is to be written and for sending a data item which has been read, having a conversion unit for parallelizing the received data item and for serializing the data item which is to be sent, having a second data interface for writing the parallelized data item to a memory arrangement via a memory data bus and for receiving the data item read from the memory arrangement via the memory data bus, having a write buffer storage for buffer-storing the data item which is to be written, and having a control unit which, after reception via the first data interface of a data item that is to be written in line with a write command, interrupts the writing of the data from the write buffer storage via the second data interface upon a subsequent read command.
    Type: Grant
    Filed: March 3, 2004
    Date of Patent: November 4, 2008
    Assignee: Infineon Technologies AG
    Inventors: Georg Braun, Hermann Ruckerbauer
  • Patent number: 7426604
    Abstract: A buffer architecture enables linked lists to be used to administer virtual output queue buffering. The buffer has three random access memories (RAMs). A data RAM holds data. A free RAM holds a linked list of entries defining free space in the data RAM. A destination RAM holds a linked list of entries defining data in the data RAM to be forwarded to a destination.
    Type: Grant
    Filed: June 14, 2006
    Date of Patent: September 16, 2008
    Assignee: Sun Microsystems, Inc.
    Inventors: Hans Olaf Rygh, Finn Egil Hoeyer Grimnes, Brian Edward Manula
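
The three-RAM organization above is, in software terms, a set of array-based linked lists: a data array, a parallel array of next indices that threads the free list, and per-destination lists threaded through the same next entries. A rough sketch under that reading, with sizes and names chosen for illustration only:

    #include <stdint.h>

    #define NBUF  256                      /* entries in the data RAM          */
    #define NDEST 8                        /* virtual output queues            */
    #define NIL   0xFFFF                   /* end-of-list marker               */

    static uint32_t data_ram[NBUF];        /* "data RAM": payload words        */
    static uint16_t next_ram[NBUF];        /* link field shared by all lists   */
    static uint16_t free_head;             /* "free RAM" list head             */
    static uint16_t dest_head[NDEST], dest_tail[NDEST];  /* "destination RAM"  */

    static void buffers_init(void)
    {
        for (uint16_t i = 0; i < NBUF; i++)
            next_ram[i] = (uint16_t)(i + 1);
        next_ram[NBUF - 1] = NIL;          /* chain every entry onto free list */
        free_head = 0;
        for (int d = 0; d < NDEST; d++)
            dest_head[d] = dest_tail[d] = NIL;
    }

    /* Enqueue one word for a destination: unlink a buffer from the free
     * list and append it to that destination's linked list. */
    static int enqueue(int dest, uint32_t word)
    {
        uint16_t b = free_head;
        if (b == NIL)
            return -1;                     /* data RAM is full                 */
        free_head = next_ram[b];

        data_ram[b] = word;
        next_ram[b] = NIL;
        if (dest_head[dest] == NIL)
            dest_head[dest] = b;
        else
            next_ram[dest_tail[dest]] = b;
        dest_tail[dest] = b;
        return 0;
    }
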
  • Patent number: 7412546
    Abstract: A method and structure for determining when a frame of information comprised of one or more buffers of data being transmitted in a network processor has completed transmission is provided. The network processor includes several control blocks, one for each data buffer, each containing control information linking one buffer to another. Each control block has a last bit feature which is a single bit settable to “one” or “zero” and indicates when the data buffer having the last bit is transmitted. The last bit is in a first position when an additional data buffer is to be chained to a previous data buffer indicating an additional data buffer is to be transmitted and a second position when no additional data buffer is to be chained to a previous data buffer. The position of the last bit is communicated to the network processor indicating the ending of a particular frame.
    Type: Grant
    Filed: December 27, 2005
    Date of Patent: August 12, 2008
    Assignee: International Business Machines Corporation
    Inventors: Claude Basso, Jean Louis Calvignac, Marco C. Heddes, Joseph Franklin Logan, Fabrice Jean Verplanken
  • Patent number: 7404058
    Abstract: A method and apparatus for enqueuing and dequeuing packets to and from a shared packet memory, while avoiding collisions. An enqueue process or state machine enqueues packets for a communication connection (e.g., channel, queue pair, flow). A dequeue process or state machine operating in parallel dequeues packets and forwards them (e.g., to an InfiniBand node). Packets are stored in the shared packet memory, and status/control information is stored in a control memory that is updated for each packet enqueue and packet dequeue. Prior to updating the packet and/or control memory, each process interfaces with the other to determine if the other process is active and/or to identify the other process' current communication connection. If the enqueue process detects a collision, it pauses (e.g., for a predetermined number of clock cycles). If the dequeue process detects a collision, it selects a different communication connection to dequeue.
    Type: Grant
    Filed: May 31, 2003
    Date of Patent: July 22, 2008
    Assignee: Sun Microsystems, Inc.
    Inventors: John M. Lo, Charles T. Cheng
  • Publication number: 20080133798
    Abstract: A hardware apparatus for receiving a packet for a TCP offload engine (TOE), and a receiving system and method using the same, are provided. Specifically, information required for protocol processing by a processor is stored in the internal queue included in the packet receiving hardware. Data to be stored in a host memory is transmitted to the host memory after the data is stored in an external memory and protocol processing is performed by the processor.
    Type: Application
    Filed: December 3, 2007
    Publication date: June 5, 2008
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Chan Ho PARK, Seong Woon KIM, Myung Joon KIM
  • Patent number: 7366803
    Abstract: A circuit for buffering data is disclosed. The circuit comprises a first circuit which is coupled to receive a stream of data blocks using a first clock signal. The first circuit removes data blocks, such as idle data blocks or a sequence ordered set of a pair of consecutive sequence ordered sets, from the stream of data blocks to create a first modified data stream which is coupled to a memory device. Finally, a second circuit coupled to the memory device generates a second modified data stream using a second clock signal. The second modified data stream preferably comprises the data blocks of the first modified data stream and idle data blocks inserted among the data blocks of the first modified data stream. Methods of buffering data received in a first clock domain and output in a second clock domain are also disclosed.
    Type: Grant
    Filed: February 23, 2005
    Date of Patent: April 29, 2008
    Assignee: Xilinx, Inc.
    Inventors: Justin L. Gaither, Alexander Linn Iles
  • Patent number: 7356624
    Abstract: A circuit for interfacing between a first component 11 operating at a first clock rate and a second component 12 operating at a second clock rate, wherein the second clock rate is higher than the first clock rate. The circuit comprises a first buffer 13 coupled to the first component 11; a second buffer 14 coupled to the second component 12; and a copy/access controller 15, 16, 17 connected to the first buffer 13, the second buffer 14, and the second component 12. The copy/access controller 15, 16, 17 is operable to copy data from the first buffer 13 to the second buffer 14 when the first buffer 13 is substantially full. It is also operable to prompt the second component 12 to access the second buffer 14 when the data is copied from the first buffer 13. The buffers can be random access memories or shift registers, and can be integrated onto the same semiconductor die as either the first or second component.
    Type: Grant
    Filed: March 24, 2000
    Date of Patent: April 8, 2008
    Assignee: Texas Instruments Incorporated
    Inventor: Mandy Mei-Feng Tsai
  • Patent number: 7346715
    Abstract: Loss of data to be transmitted from a peripheral device to a host before a software hierarchy of the host side completely starts is prevented. In the time period before a host has completely returned from a sleep mode to a normal operation mode, data outputted from a receiver is stored in a second buffer memory of a communication control device. When the host reaches the normal operation mode, an application hierarchy in the host transmits a transmission approval command to a control unit, and then the data is transferred from the second buffer memory to a first buffer memory. Since communication between the host and the communication control device of the receiver is resumed and then the data stored in the first buffer memory is sent to the host through a USB line, loss of the data can be prevented.
    Type: Grant
    Filed: September 15, 2003
    Date of Patent: March 18, 2008
    Assignee: Alps Electric Co., Ltd
    Inventor: Naoyuki Hatano
  • Patent number: 7330917
    Abstract: Decimation of data from a fixed-length queue retains a representative sample of the old data. Exponential decimation removes every nth sample. Dithered exponential decimation offsets the exponential decimation approach by a probabilistic amount. Recursive decimation selects a portion of the queue and removes elements.
    Type: Grant
    Filed: December 6, 2005
    Date of Patent: February 12, 2008
    Assignee: Agilent Technologies, Inc.
    Inventors: Glenn R Engel, Bruce Hamilton
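
Of the decimation variants named above, the simplest to illustrate is exponential decimation, which removes every nth sample once the fixed-length queue fills; repeating it as the queue refills leaves an exponentially thinned, representative history of old data. A rough sketch, assuming the queue is an in-memory array of samples and that the function name is hypothetical:

    #include <stddef.h>

    /* Compact the buffer by dropping every nth sample, keeping the rest
     * in order.  Returns the new number of samples.  With n = 2 this
     * roughly halves the buffer each time it is applied.  n must be >= 2. */
    static size_t decimate_every_nth(double *samples, size_t count, size_t n)
    {
        size_t kept = 0;
        if (n < 2)
            return count;               /* nothing sensible to drop */
        for (size_t i = 0; i < count; i++) {
            if ((i + 1) % n == 0)
                continue;               /* drop this sample */
            samples[kept++] = samples[i];
        }
        return kept;
    }
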
  • Patent number: 7302503
    Abstract: A direct memory access system utilizing a local memory that stores a plurality of DMA command lists, each comprising at least one DMA command. A command queue can hold a plurality of entries, each entry comprising a pointer field and a sequence field. The pointer field points to one of the DMA command lists. The sequence field holds a sequence value. A DMA engine accesses an entry in the command queue and then accesses the DMA commands of the DMA command list pointed to by the pointer field of the accessed entry. The DMA engine performs the DMA operations specified by the accessed DMA commands. The DMA engine makes available the sequence value held in the sequence field of the accessed entry when all of the DMA commands in the accessed command list have been performed. In one embodiment, the command queue is part of the DMA engine.
    Type: Grant
    Filed: April 1, 2003
    Date of Patent: November 27, 2007
    Assignee: Broadcom Corporation
    Inventor: Alexander G. MacInnis
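
The pointer/sequence pairing described above gives software a cheap way to learn how far the DMA engine has progressed: the engine makes an entry's sequence value available only after every command in the list that the entry points to has been performed. The sketch below illustrates the data layout and a polling wait; the type names, and the use of a single completed-sequence word, are assumptions for illustration rather than the patented design.

    #include <stdint.h>

    typedef struct {                       /* one DMA command                  */
        uint32_t src_addr;
        uint32_t dst_addr;
        uint32_t length;
    } dma_cmd_t;

    typedef struct {                       /* one command queue entry          */
        const dma_cmd_t *list;             /* pointer field: first command     */
        uint32_t         count;            /* commands in that list            */
        uint32_t         sequence;         /* sequence field                   */
    } cmdq_entry_t;

    /* The engine is assumed to write an entry's sequence value here only
     * after all DMA commands in that entry's command list have completed. */
    extern volatile uint32_t dma_completed_sequence;

    /* Software-side wait: spin until the engine has published the entry's
     * sequence value (wrap-safe signed comparison). */
    static void wait_for_entry(const cmdq_entry_t *e)
    {
        while ((int32_t)(dma_completed_sequence - e->sequence) < 0)
            ;                              /* a real driver would yield here   */
    }
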
  • Patent number: 7287102
    Abstract: A storage controller includes a first memory that stores a plurality of data blocks that include first and second noncontiguous data segments. A queue module stores data lengths and data start addresses of the first and second data segments. A read assembly module communicates with the first memory and the queue module, receives a request to read the first and second data segments from a host, reads the plurality of data blocks from the first memory, extracts the first and second data segments from the read plurality of data blocks based on the data lengths and data start addresses after the plurality of data blocks is read from the first memory, and transfers the first and second data segments contiguously to the host.
    Type: Grant
    Filed: January 21, 2004
    Date of Patent: October 23, 2007
    Assignee: Marvell International Ltd.
    Inventors: Theodore C. White, William W. Dennin, Angel G. Perozo
  • Patent number: 7284074
    Abstract: A system and method for operating on data within a network device is described. Between two data operations in a network device is a FIFO queue, which is used to separate the clock domains of the data operations. Data from the first operation is stored in the FIFO queue, which signals an indication to the second operation that there is data in the queue. When the second operation is signaled that there is data in the FIFO queue, it immediately begins reading data from the queue, and begins performing its prescribed operations on the data once it has read enough data from the queue for it to begin operating.
    Type: Grant
    Filed: October 31, 2002
    Date of Patent: October 16, 2007
    Assignee: Force10 Networks, Inc.
    Inventors: Eugene Lee, Cong Ye, Peter Chang, Ajoy Aswadhati
  • Patent number: 7284061
    Abstract: Remotely obtaining exclusive control of a device by remotely establishing communication with the device over a network, requesting to obtain remote exclusive control of the device's capabilities, and determining whether remote exclusive control of the device's capabilities can be obtained based on whether or not another user already has exclusive control of the device's capabilities. In a first case where it is determined that remote exclusive control can be obtained, authenticating a user requesting to obtain remote exclusive control of the device's capabilities, providing the user remote exclusive control of the device's capabilities after the user has been authenticated, and temporarily deferring requests by users other than the user who has obtained remote exclusive control to perform operations utilizing the device's capabilities during a period in which the user maintains remote exclusive control of the device's capabilities.
    Type: Grant
    Filed: November 13, 2001
    Date of Patent: October 16, 2007
    Assignee: Canon Kabushiki Kaisha
    Inventors: Don Hideyasu Matsubayashi, Craig Mazzagatte, Royce E Slick
  • Patent number: 7281086
    Abstract: A mixed queue method for managing storage requests directed to a disk drive includes a low-priority request queue on which all low-priority requests are placed and where they are subject to throughput optimization by re-ordering. When a high-priority request limit has not been reached, high-priority requests are placed on a high-priority request queue where they are executed in a pre-emptive manner with respect to the queued low-priority requests, thus experiencing reduced access time. When the high-priority request limit has been reached, the high-priority requests are placed on the low-priority request queue, such that the high-priority requests are included in the throughput optimization along with the low-priority requests on the request queue. Starvation of the low-priority requests is avoided, and the overall throughput of the disk drive is maintained at a relatively high level.
    Type: Grant
    Filed: June 2, 2005
    Date of Patent: October 9, 2007
    Assignee: EMC Corporation
    Inventors: Sachin Suresh More, Yechiel Yochai, Amnon Naamad, Adnan Sahin
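
The admission rule above is easy to state as code: a high-priority request goes to the pre-emptive queue only while the configured high-priority limit has not been reached; otherwise it joins the low-priority queue and participates in its throughput re-ordering. A hedged sketch of just that decision; the queue routines and type names are hypothetical, not taken from the patent.

    #include <stdbool.h>

    struct request;                        /* a storage request; details omitted */

    /* Hypothetical underlying queues. */
    extern void hp_queue_put(struct request *r);  /* served pre-emptively, in order */
    extern void lp_queue_put(struct request *r);  /* re-ordered for throughput      */

    typedef struct {
        int hp_queued;                     /* high-priority requests currently queued */
        int hp_limit;                      /* configured high-priority request limit  */
    } mixed_sched_t;

    /* Returns true if the request was routed to the high-priority queue. */
    static bool submit(mixed_sched_t *s, struct request *r, bool high_priority)
    {
        if (high_priority && s->hp_queued < s->hp_limit) {
            s->hp_queued++;
            hp_queue_put(r);
            return true;
        }
        /* Low-priority request, or the high-priority limit was reached:
         * join the low-priority queue and its throughput optimization. */
        lp_queue_put(r);
        return false;
    }
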
  • Patent number: 7266621
    Abstract: A device is presented including a host controller. A host controller driver is connected to the host controller. The host controller arranges queue element transfer descriptors (qTDs) in a circularly linked order. Also presented is a method including determining whether execution of a first queue element transfer descriptor (qTD) in a first bank including many qTDs results in a short packet condition; following an alternate pointer in the first bank that points to a second bank if execution of the first qTD resulted in the short packet condition; following a next pointer to a second qTD in the first bank if the execution of the first qTD completed normally; and executing the second qTD in the first bank. The qTDs in the first bank and the second bank are circularly linked.
    Type: Grant
    Filed: October 31, 2003
    Date of Patent: September 4, 2007
    Assignee: Intel Corporation
    Inventor: Brian A. Leete
  • Patent number: 7266650
    Abstract: A method, apparatus, and computer program product are provided for implementing an enhanced circular queue using loop counts for command processing. A circular queue includes a plurality of entries for storing commands. As command entries are added to the queue at the head of the queue, a head loop count is stored with each command entry. A head pointer is updated to the head of the queue. When the head pointer wraps from a last queue entry to a first queue entry, the head loop count is incremented. A tail pointer points to an oldest command entry, and is updated when the oldest command entry is executed. When the tail pointer advances and wraps from a last queue entry to a first queue entry, the tail pointer loop count is incremented.
    Type: Grant
    Filed: November 12, 2004
    Date of Patent: September 4, 2007
    Assignee: International Business Machines Corporation
    Inventors: Paul Allen Ganfield, Lonny Lambrecht
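
The loop counts above resolve the usual ambiguity of a circular queue whose head and tail indices are equal: matching loop counts mean the queue is empty, differing loop counts mean it is full. The sketch below shows one reading of that scheme; the patent stores a head loop count with every entry, whereas this simplification keeps only the two counters, and all names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define QDEPTH 16

    typedef struct {
        uint32_t cmds[QDEPTH];
        uint32_t head, head_loop;       /* where the next command is added  */
        uint32_t tail, tail_loop;       /* oldest command, executed next    */
    } loopq_t;

    static bool loopq_empty(const loopq_t *q)
    {
        return q->head == q->tail && q->head_loop == q->tail_loop;
    }

    static bool loopq_full(const loopq_t *q)
    {
        return q->head == q->tail && q->head_loop != q->tail_loop;
    }

    /* Add a command at the head; the head loop count increments on wrap. */
    static bool loopq_push(loopq_t *q, uint32_t cmd)
    {
        if (loopq_full(q))
            return false;
        q->cmds[q->head] = cmd;
        if (++q->head == QDEPTH) {
            q->head = 0;
            q->head_loop++;
        }
        return true;
    }

    /* Remove the oldest command from the tail once it has been executed. */
    static bool loopq_pop(loopq_t *q, uint32_t *cmd)
    {
        if (loopq_empty(q))
            return false;
        *cmd = q->cmds[q->tail];
        if (++q->tail == QDEPTH) {
            q->tail = 0;
            q->tail_loop++;
        }
        return true;
    }
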
  • Patent number: 7254651
    Abstract: A scheduler configured to schedule multiple channels of a Direct Memory Access (DMA) device includes a shift structure having entries corresponding to the multiple channels to be scheduled. Each entry in the shift structure includes multiple fields. Each entry also includes a weight that is determined based on these multiple fields. The scheduler also includes a comparison-logic circuit that is configured to then sort the entries based on their respective weights.
    Type: Grant
    Filed: December 18, 2000
    Date of Patent: August 7, 2007
    Assignee: Redback Networks Inc.
    Inventors: Ranjit J. Rozario, Ravikrishna Cherukuri
  • Patent number: 7254654
    Abstract: A data transfer device is disclosed for writing data to and reading data from a disk drive system through a plurality of ports of the data transfer device. The data transfer device includes a first buffer for serially receiving, from a host system, control portions of data read requests and data write transfers; a second buffer for serially receiving, from the host system, data portions of data write transfers received by the first buffer; and N temporary storage devices, wherein N is a positive integer, coupled to the first buffer and the second buffer, the N temporary storage devices for receiving in parallel and temporarily storing consecutive control portions of the data read transfers and data write transfers from the first buffer. Up to N of the data read transfers and data write transfers are transferred to the disk drive system through the plurality of ports simultaneously.
    Type: Grant
    Filed: April 1, 2004
    Date of Patent: August 7, 2007
    Assignee: EMC Corporation
    Inventors: Almir Davis, Christopher S. MacLellan
  • Patent number: 7249206
    Abstract: An apparatus and method for dynamically allocating memory between inbound and outbound paths of a networking protocol handler so as to optimize the ratio of a given amount of memory between the inbound and outbound buffers is presented. Dedicated but sharable buffer memory is provided for both the inbound and outbound processors of a computer network. Buffer memory is managed so as to dynamically alter what portion of memory is used to receive and store incoming data packets or to transmit outgoing data packets. Use of the present invention reduces throttling of data rate transmissions and other memory access bottlenecks associated with conventional fixed-memory network systems.
    Type: Grant
    Filed: July 8, 2004
    Date of Patent: July 24, 2007
    Assignee: International Business Machines Corporation
    Inventors: Mark R. Bilak, Robert M. Bunce, Steven C. Parker, Brian J. Schuh
  • Patent number: 7246182
    Abstract: Multiple non-blocking FIFO queues are concurrently maintained using atomic compare-and-swap (CAS) operations. In accordance with the invention, each queue provides direct access to the nodes stored therein to an application or thread, so that each thread may enqueue and dequeue nodes that it may choose. The prior art merely provided access to the values stored in the node. In order to avoid anomalies, the queue is never allowed to become empty by requiring the presence of at least a dummy node in the queue. The ABA problem is solved by requiring that the next pointer of the tail node in each queue point to a “magic number” unique to the particular queue, such as the pointer to the queue head or the address of the queue head, for example. This obviates any need to maintain a separate count for each node.
    Type: Grant
    Filed: October 15, 2004
    Date of Patent: July 17, 2007
    Assignee: Microsoft Corporation
    Inventors: Alessandro Forin, Andrew Raffman
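
The "magic number" above replaces the usual NULL terminator on the tail node's next pointer with a value unique to the queue, such as the address of the queue head, so code can tell that a node is the last one in a particular queue without keeping a per-node count. The CAS-based enqueue/dequeue protocol itself is not reproduced here; this is only a sketch of that end-of-queue convention, with illustrative names.

    #include <stdbool.h>

    typedef struct node {
        struct node *next;          /* next node, or the queue's end marker */
        void        *value;
    } node_t;

    typedef struct queue {
        node_t *head;               /* always points at a dummy or first node */
        node_t *tail;
    } queue_t;

    /* The per-queue "magic number": the address of the queue's head field.
     * No node can legitimately live at that address, so it is unambiguous. */
    static node_t *end_marker(queue_t *q)
    {
        return (node_t *)&q->head;
    }

    /* A node is the last one in this queue iff its next pointer holds the
     * queue's own end marker rather than NULL or a real node address. */
    static bool is_last(queue_t *q, const node_t *n)
    {
        return n->next == end_marker(q);
    }
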
  • Patent number: 7240156
    Abstract: The present invention partitions a cache region of a storage subsystem for each user and prevents interference between user-dedicated regions. A plurality of CLPR can be established within the storage subsystem. A CLPR is a user-dedicated region that can be used by partitioning the cache region of a cache memory. Management information required to manage the data stored in the cache memory is allocated to each CLPR in accordance with the attribute of the segment or slot. The clean queue and clean counter, which manage the segments in a clean state, are provided in each CLPR. The dirty queue and dirty counter are used jointly by all the CLPR. The free queue, classification queue, and BIND queue are applied jointly to all the CLPR, only the counters being provided in each CLPR.
    Type: Grant
    Filed: January 10, 2006
    Date of Patent: July 3, 2007
    Assignee: Hitachi, Ltd.
    Inventors: Sachiko Hoshino, Takashi Sakaguchi, Yasuyuki Nagasoe, Shoji Sugino
  • Patent number: 7234026
    Abstract: A media player and a method for operating a media player are disclosed. A media program is able to substantially immediately begin playing after a media play selection has been made. Through intelligent operation, the media program is able to start playing even before the media program has been substantially or completely loaded from disk storage into semiconductor memory (i.e., cache memory). Additionally, the media program can be loaded into semiconductor memory through use of a background process without disturbing the playing of the media program. Further, if desired, the disk storage is able to be aggressively “powered off” when not being accessed, thereby enhancing battery life when being battery-powered.
    Type: Grant
    Filed: May 17, 2005
    Date of Patent: June 19, 2007
    Assignee: Apple Inc.
    Inventors: Jeffrey L. Robbin, Ned K. Holbrook, Steven Bollinger
  • Patent number: 7228362
    Abstract: Various embodiments of the invention relate to an apparatus and method for efficiently implementing out-of-order servicing of read requests originating from an input/output (I/O) interface with minimal additional storage. For example, a number of read entries may be generated from data read requests stored in a first-in-first-out (FIFO) buffer in a first order. The read entries are stored in a storage device and each read entry identifies internal data reads to read data to service the data read request to which the read entry corresponds. A controller coupled to the storage structure may then submit the internal data reads to a central arbiter to read data in a second order that is different than the first order. Moreover, the controller also allows the second order to include internal data reads from one read entry, before completing servicing of another partially serviced read entry, thus providing "simultaneous" servicing of several read entries.
    Type: Grant
    Filed: March 31, 2004
    Date of Patent: June 5, 2007
    Assignee: Intel Corporation
    Inventors: Michelle C. Jen, Debendra Das Sharma
  • Patent number: 7219127
    Abstract: In a real-time collaboration server, a control unit manages a collaboration mode. The control unit operates a virtual client that maintains a virtual screen reflecting the status of the collaboration (e.g., the contents of a shared desktop or whiteboard). The virtual client renders collaboration data within the virtual screen. New clients are synchronized with an ongoing collaboration by packing and sending them a copy of the virtual screen. The control unit maintains a queue of collaboration data to be sent to participating clients. Each client may have a pointer identifying the queued data it is processing. The queue may be collapsed (e.g., when it reaches a maximum size) by sending a copy of the virtual screen to one or more clients that have not yet consumed old data in the queue; those clients are then updated to skip the queue entries embodied in the virtual screen.
    Type: Grant
    Filed: March 13, 2003
    Date of Patent: May 15, 2007
    Assignee: Oracle International Corporation
    Inventors: Paul Huck, Aleksey Skurikhin, Ilya Teplov
  • Patent number: 7213138
    Abstract: A data transmission system where an image providing device and a printer are directly connected by a 1394 serial bus, a command is sent from the image providing device to the printer, then a response to the command is returned from the printer to the image providing device. Image data is sent from the image providing device to the printer based on information included in the response. The printer converts the image data outputted from the image providing device into print data. Thus, printing can be performed without a host computer by directly connecting the image providing device and the printer by the 1394 serial bus or the like.
    Type: Grant
    Filed: February 17, 1998
    Date of Patent: May 1, 2007
    Assignee: Canon Kabushiki Kaisha
    Inventors: Koji Fukunaga, Naohisa Suzuki, Kiyoshi Katano, Jiro Tateyama, Atsushi Nakamura, Makoto Kobayashi
  • Patent number: 7209984
    Abstract: A printing system includes a printer and computers connected by a network. A mail server is provided for transmitting notification data from the printer to the computers in the form of an email message. Print data stored in the printer is erased once the print data has been used for reprint operations a certain number of times. If the amount of print data stored in the printer exceeds a reference size, then print data sets are selected for erasure based on predetermined conditions. Data stored in the printer can be retrieved from the printer and amended at the computer.
    Type: Grant
    Filed: March 10, 2004
    Date of Patent: April 24, 2007
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventor: Yoshiko Orito
  • Patent number: 7200696
    Abstract: A method and structure for determining when a frame of information comprised of one or more buffers of data being transmitted in a network processor has completed transmission is provided. The network processor includes a plurality of control blocks, one for each data buffer, each containing control information to link one buffer to another for transmission. Each of the control blocks has a last bit feature which is a single bit and indicates when the data buffer having the last bit is transmitted. This last bit feature is a bit which can be set to either zero or one. The last bit feature is in a first position when an additional data buffer is to be chained to a previous data buffer indicating an additional data buffer is to be transmitted and a second position when no additional data buffer is to be chained to a previous data buffer.
    Type: Grant
    Filed: April 6, 2001
    Date of Patent: April 3, 2007
    Assignee: International Business Machines Corporation
    Inventors: Claude Basso, Jean Louis Calvignac, Marco C. Heddes, Joseph Franklin Logan, Fabrice Jean Verplanken
  • Patent number: 7194567
    Abstract: A bus bridge for coupling between a first bus and a second bus includes: multiple ticket registers; a ticket dispenser counter; and a ticket call counter. The ticket dispenser counter dispenses a ticket value to a request received at the bridge from the first bus for access to the second bus. This ticket value is held in one ticket register of the multiple ticket registers. The ticket call counter provides ticket call values, and the request is granted access to the second bus when a current ticket call value equals the ticket value dispensed to the request. While the request waits for access to the second bus, the bus bridge can perform work on the request. When request coherency is maintained employing snooping, ticket values assigned to a plurality of requests maintain a snoop response ordering of the requests for access to the second bus.
    Type: Grant
    Filed: February 24, 2005
    Date of Patent: March 20, 2007
    Assignee: International Business Machines Corporation
    Inventors: Clarence R. Ogilvie, Charles S. Woodruff
  • Patent number: 7191287
    Abstract: A hybrid-type storage system having both SAN and NAS interfaces can be implemented by simple hardware capable of carrying out a SAN function independently of a NAS function and a NAS load. To be more specific, a controller of the storage system comprises a NAS controller for accepting an I/O command issued for a file unit and a SAN controller for accepting an I/O command issued for a block unit. The NAS controller converts an I/O command issued for a file unit into an I/O command issued for a block unit, and transfers the I/O command issued for a block unit to the SAN controller. The SAN controller makes an access to data stored in a disk apparatus in accordance with an I/O command received from the SAN or from the NAS controller as a command issued for a block unit. The NAS and SAN controllers are capable of operating independently of each other.
    Type: Grant
    Filed: September 8, 2006
    Date of Patent: March 13, 2007
    Assignee: Hitachi, Ltd.
    Inventors: Yusuke Nonaka, Naoto Matsunami, Ikuya Yagisawa, Akira Nishimoto
  • Patent number: 7188198
    Abstract: A method, apparatus and computer program product are provided for implementing dynamic Virtual Lane buffer reconfiguration in a channel adapter. A first register is provided for communicating an adapter buffer size and allocation capability for the channel adapter. At least one second register is provided for communicating a current port buffer size and one second register is associated with each physical port of the channel adapter. A plurality of third registers is provided for communicating a current VL buffer size, and one third register is associated with each VL of each physical port of the channel adapter. The second register is used for receiving change requests for adjusting the current port buffer size for an associated physical port. The third register is used for receiving change requests for adjusting the current VL buffer size for an associated VL.
    Type: Grant
    Filed: September 11, 2003
    Date of Patent: March 6, 2007
    Assignee: International Business Machines Corporation
    Inventors: Bruce Leroy Beukema, Ronald Edward Fuhs, Calvin Charles Paynton, Steven Lyn Rogers, Bruce Marshall Walk
  • Patent number: 7181573
    Abstract: In response to receiving a request to perform an enqueue or dequeue operation, a corresponding queue descriptor specifying the structure of the queue is referenced to execute the operation. The queue descriptor is stored in a processor's memory controller logic.
    Type: Grant
    Filed: January 7, 2002
    Date of Patent: February 20, 2007
    Assignee: Intel Corporation
    Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein
  • Patent number: 7162512
    Abstract: Guaranteed, exactly once delivery of messages is disclosed. In one embodiment, there is a sender and a receiver. In a sender transaction, the sender does the following: receives a message from a sender queue; generates a substantially unique identifier and an expiration time for the message; and, saves the identifier, the expiration time, and the message in a sender database. The sender then sends the identifier, the expiration time, and the message to the receiver. In a receiver transaction, the receiver then does the following: receives the identifier, the expiration time, and the message from a receiver queue; determines whether the message has expired based on the expiration time and determines whether the message is present in a receiver database by its identifier; and, upon determining that the message has not expired and is not present in the receiver database, stores the message in the receiver database, and performs actions associated with the message.
    Type: Grant
    Filed: February 28, 2000
    Date of Patent: January 9, 2007
    Assignee: Microsoft Corporation
    Inventors: Neta Amit, Alexander Frank, Yifat Peled
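
The receiver-side check in the abstract above, drop if expired, drop if already seen, otherwise store and process, is the core of the exactly-once guarantee. A compact sketch of that check; the receiver database is abstracted behind hypothetical helpers (db_contains, db_store, perform_actions), which in the patent's scheme would all run inside the same receiver transaction as the dequeue.

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    typedef struct {
        uint64_t id;            /* substantially unique message identifier */
        time_t   expires_at;    /* expiration time chosen by the sender    */
        /* ... message payload ... */
    } message_t;

    /* Hypothetical receiver-database helpers. */
    extern bool db_contains(uint64_t id);
    extern void db_store(const message_t *m);
    extern void perform_actions(const message_t *m);

    /* Process one message pulled from the receiver queue.  Returns true
     * if the message was accepted and acted on exactly once. */
    static bool receive_once(const message_t *m)
    {
        if (time(NULL) > m->expires_at)
            return false;       /* expired: discard                        */
        if (db_contains(m->id))
            return false;       /* duplicate delivery: discard             */
        db_store(m);            /* remember the id for future checks       */
        perform_actions(m);
        return true;
    }
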
  • Patent number: 7158244
    Abstract: A method of managing a queue of print jobs in a printer is disclosed, wherein the jobs are created by specifying print data and print parameters for each job, and the jobs are put into the print queue, and wherein, before print processing of a job in the queue begins, a start condition for the job is checked and printing is started only when the start condition is fulfilled. The method includes steps of checking the status of a mode indicator specifying whether the printer is in a “keep going” mode or a “keep sequence” mode; and, when a job in the queue is reached for which the start condition is not fulfilled, postponing print processing of this job and proceeding with a next job, if any, for which the start condition is fulfilled, if the printer is in the “keep going” mode, or stopping print processing, if the printer is in the “keep sequence” mode.
    Type: Grant
    Filed: March 19, 2002
    Date of Patent: January 2, 2007
    Assignee: Océ-Technologies B.V.
    Inventors: Monique Gerardine Miranda Sommer, Johannes Hubertus Theodorus Peters, Frederik de Jong, Louis Anna Jozef Dohmen, Johannes Josephus Maria Goossens, Pieter Berend Johannes Deen, Veronika Toumanova
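
The two modes above differ only in what happens when a queued job does not yet satisfy its start condition: "keep going" postpones it and tries the next job, while "keep sequence" stops processing to preserve order. A single-pass sketch of that scheduling loop; the job representation and callback names are assumptions for illustration.

    #include <stdbool.h>
    #include <stddef.h>

    typedef enum { KEEP_GOING, KEEP_SEQUENCE } queue_mode_t;

    typedef struct {
        bool (*start_condition_met)(void);  /* e.g. required paper is loaded */
        void (*print)(void);
    } print_job_t;

    /* Walk the queue once, honoring the mode indicator. */
    static void process_queue(print_job_t *jobs, size_t njobs, queue_mode_t mode)
    {
        for (size_t i = 0; i < njobs; i++) {
            if (jobs[i].start_condition_met()) {
                jobs[i].print();
            } else if (mode == KEEP_SEQUENCE) {
                break;              /* stop; later jobs wait their turn */
            }
            /* KEEP_GOING: postpone this job and try the next one. */
        }
    }
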
  • Patent number: 7155570
    Abstract: In one embodiment, a trace buffer circuit for use with a pipelined digital signal processor (DSP) may include a series of interconnected registers that operate as a first-in first-out (FIFO) register on a write operation and a last-in first-out (LIFO) register on a read operation. On the write operation, a branch target/source address pair may be written to a first pair of trace buffer registers, and the contents of each register may be shifted two registers downstream. On the read operation, one instruction address may be read from a top register, and the contents of each register may be shifted one register upstream. The trace buffer may also include structure to enable compression of hardware and software loops in the program flow. A valid bit may be assigned to each instruction address in the trace buffer and a valid bit buffer with a structure parallel to that of the trace buffer may be provided to track the valid bits.
    Type: Grant
    Filed: September 29, 2000
    Date of Patent: December 26, 2006
    Assignees: Intel Corporation, Analog Devices, Inc.
    Inventors: Ravi P. Singh, Charles P. Roth, Gregory A. Overkamp
  • Patent number: 7143317
    Abstract: A service processor for a server system includes an event log that, once full, stores recent events by overwriting events of intermediate age so that the information required to diagnose both cascade errors and hangs are preserved. This contrasts with bottom-up buffers that discard recent events when full and with circular buffers that discard the oldest events when full. The event log can be reset by moving an exception region, that is, a region that is not overwritten by recent events. Alternatively, a partial reset can initialize an exception region (e.g., a bottom-up sublog), while a circular region or sublog continues to operate without being reset.
    Type: Grant
    Filed: June 4, 2003
    Date of Patent: November 28, 2006
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen B. Lyle, Paul Henry Bouchier
  • Patent number: 7130936
    Abstract: In summary, one aspect of the present invention is directed to a method for a shared memory queue to support communicating between computer processes, such as an enqueuing process and a dequeuing process. A buffer may be allocated including at least one element having a data field and a reserve field, a head pointer and a tail pointer. The enqueuing process may enqueue a communication into the buffer using mutual exclusive access to the element identified by the head pointer. The dequeuing process may dequeue a communication from the buffer using mutual exclusive access to the element identified by the tail pointer. Mutual exclusive access to said head pointer and tail pointer is not required. A system and computer program for a shared memory queue are also disclosed.
    Type: Grant
    Filed: April 28, 2003
    Date of Patent: October 31, 2006
    Assignee: Teja Technologies, Inc.
    Inventors: Mandeep S. Baines, Shamit D. Kapadia, Akash R. Deshpande
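
In the arrangement above, only the enqueuing process advances the head pointer and only the dequeuing process advances the tail pointer, so the pointers themselves need no mutual exclusion; each element's reserve field arbitrates access to that one slot. A single-producer/single-consumer ring-buffer sketch in that spirit; the field and function names are chosen for illustration and are not the patent's.

    #include <stdbool.h>
    #include <stdatomic.h>

    #define NSLOTS 64

    typedef struct {
        atomic_bool reserved;           /* set while the slot holds a message */
        int         data;               /* the communication itself           */
    } slot_t;

    typedef struct {
        slot_t   ring[NSLOTS];
        unsigned head;                  /* touched only by the enqueuing process */
        unsigned tail;                  /* touched only by the dequeuing process */
    } shmq_t;

    /* Enqueue: the producer alone advances the head pointer, so no lock on
     * the pointer is needed; the reserve flag guards the slot itself. */
    static bool shmq_enqueue(shmq_t *q, int value)
    {
        slot_t *s = &q->ring[q->head % NSLOTS];
        if (atomic_load(&s->reserved))
            return false;               /* queue is full at this slot */
        s->data = value;
        atomic_store(&s->reserved, true);   /* publish only after the data */
        q->head++;
        return true;
    }

    /* Dequeue: the consumer alone advances the tail pointer. */
    static bool shmq_dequeue(shmq_t *q, int *value)
    {
        slot_t *s = &q->ring[q->tail % NSLOTS];
        if (!atomic_load(&s->reserved))
            return false;               /* nothing to read here */
        *value = s->data;
        atomic_store(&s->reserved, false);  /* release the slot */
        q->tail++;
        return true;
    }
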