Queue Content Modification Patents (Class 710/54)
-
Patent number: 6862630
Abstract: A transmission circuit for transmitting data of varying priorities on a network medium is provided. The transmission circuit includes sub-circuits to receive and store data frames into random access memory frame buffers and priority tables. A priority-resolution sub-circuit selects the highest-priority frame, and a frame-transmission sub-circuit transmits the frame to a media access controller to be made available on the network medium.
Type: Grant
Filed: August 23, 2000
Date of Patent: March 1, 2005
Assignee: Advanced Micro Devices, Inc.
Inventors: Atul Garg, Yatin Acharya
-
Patent number: 6859851
Abstract: Methods and apparatus control the loading of a memory buffer. The memory buffer may have a watermark with a first watermark value and can receive an advance indication of a memory service interruption. Based at least in part on the received advance indication of the memory service interruption, the watermark can be modified to have a second watermark value different from the first watermark value.
Type: Grant
Filed: December 20, 1999
Date of Patent: February 22, 2005
Assignee: Intel Corporation
Inventor: Gad S. Sheaffer
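The watermark-adjustment idea in 6859851 can be sketched as follows. The class name, the additive adjustment rule, and all values are illustrative assumptions; the patent only says the watermark changes in response to an advance interruption notice:

```python
class WatermarkBuffer:
    """Toy model: a buffer refilled whenever its fill level drops
    below a watermark; advance notice of a service interruption
    raises the watermark so refills start earlier."""

    def __init__(self, capacity, watermark):
        self.capacity = capacity
        self.watermark = watermark   # refill when level drops below this
        self.level = 0

    def needs_refill(self):
        return self.level < self.watermark

    def on_interruption_notice(self, expected_stall):
        # Raise the watermark by the expected stall length (an assumed
        # rule), clamped to capacity, so the buffer is topped up ahead
        # of the interruption.
        self.watermark = min(self.capacity, self.watermark + expected_stall)

buf = WatermarkBuffer(capacity=64, watermark=16)
buf.level = 20
assert not buf.needs_refill()          # comfortably above the watermark
buf.on_interruption_notice(expected_stall=10)
assert buf.needs_refill()              # watermark rose to 26, above level
```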
-
Patent number: 6842800
Abstract: A buffer storage system is provided for storing groupings of data of varying size. The buffer storage system comprises a buffer storage section and a buffer management section. The buffer storage section has a first buffer subsection and a second buffer subsection. The first buffer subsection includes a plurality of buffer units of a first buffer unit size. The second buffer subsection includes a plurality of buffer units of a second buffer unit size wherein the second buffer unit size is larger than the first buffer unit size. The buffer management section is operable to determine the size of an incoming data grouping and to direct the incoming data grouping to one of the buffer subsections based on the size of the data grouping.
Type: Grant
Filed: August 30, 2001
Date of Patent: January 11, 2005
Assignee: Marconi Intellectual Property (Ringfence) Inc.
Inventor: Jean-Lou Dupont
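A minimal sketch of the size-based routing in 6842800. The unit sizes and the "fits in one small unit" rule are assumptions for illustration only:

```python
SMALL_UNIT = 64    # bytes per unit in the first subsection (illustrative)
LARGE_UNIT = 512   # bytes per unit in the second subsection (illustrative)

small_pool, large_pool = [], []

def store(grouping: bytes) -> str:
    """Direct an incoming data grouping to the subsection whose buffer
    unit size best fits it, as the buffer management section would."""
    pool = small_pool if len(grouping) <= SMALL_UNIT else large_pool
    pool.append(grouping)
    return "small" if pool is small_pool else "large"

assert store(b"x" * 40) == "small"    # fits a small unit
assert store(b"x" * 300) == "large"   # needs a large unit
```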
-
Patent number: 6839804
Abstract: A disk array storage device apparatus enhances the performance of an application on a data processing system operating with a disk array storage device in which the completion of tasks associated with different transactions on one logical storage device is a condition precedent to the completion of other transactions. Specific tasks related to the one logical device are given priority over tasks related to all other logical storage devices. In a specific implementation, reconnect tasks are given the highest priority, with reconnect tasks from the one logical storage device ranked first among them. A second category of tasks related to the one logical storage device can be given priority over all other tasks except reconnect tasks. All other tasks are given a priority below that of the first and second task categories.
Type: Grant
Filed: October 6, 2003
Date of Patent: January 4, 2005
Assignee: EMC Corporation
Inventors: Arieh Don, Natan Vishlitzky, Alexandr Veprinsky
-
Patent number: 6836785
Abstract: The present invention provides a throttling system that can throttle incoming requests to a server that includes a variable sized buffer for holding incoming calls prior to processing by the server. The number of requests that are held in a queue by the buffer can be dependent on the overload status of the server. If the server is not overloaded, the number of requests that are held in the buffer can be large, such as the full capacity of the buffer. Alternatively, if the server is overloaded for a predetermined amount of time, then the number of requests that are held in the buffer can be decreased, such as to only a portion of the full capacity of the buffer. Any requests that arrive at the buffer once the buffer is at its capacity can be discarded or blocked. Accordingly, a reduction of the buffer size in an overloaded state results in superior delay performance without increased request blocking of the processor.
Type: Grant
Filed: November 22, 2000
Date of Patent: December 28, 2004
Assignee: AT&T Corp.
Inventors: Yury Bakshi, Carolyn R. Johnson
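The overload-driven capacity reduction in 6836785 can be sketched like this. The reduced fraction, the boolean overload flag, and all names are assumptions; the patent ties the reduction to overload persisting for a predetermined time:

```python
class ThrottledQueue:
    """Toy model: the number of requests held shrinks while the server
    is overloaded; arrivals beyond the current limit are discarded."""

    def __init__(self, capacity, reduced_fraction=0.25):
        self.capacity = capacity
        self.reduced_limit = int(capacity * reduced_fraction)
        self.overloaded = False     # set once overload has persisted long enough
        self.pending = []

    @property
    def limit(self):
        return self.reduced_limit if self.overloaded else self.capacity

    def offer(self, request):
        if len(self.pending) >= self.limit:
            return False            # discarded/blocked
        self.pending.append(request)
        return True

q = ThrottledQueue(capacity=8)
for i in range(8):
    assert q.offer(i)               # full capacity while not overloaded
q.overloaded = True                 # limit drops to 2
assert not q.offer(99)              # new arrivals are now blocked
```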
-
Patent number: 6832274
Abstract: Method and apparatus are described that translate addresses of transactions. A first interface may receive a first address portion of a first transaction and a first address portion of a second transaction. The first address portion may be translated to a second address portion prior to receiving all portions of the first transaction. The first address portion of the second transaction may be translated to a second address portion prior to receiving all portions of the first transaction.
Type: Grant
Filed: June 2, 2003
Date of Patent: December 14, 2004
Assignee: Intel Corporation
Inventors: Eric J. Dahlen, Hidetaka Oki
-
Patent number: 6813658
Abstract: A dynamic data queuing mechanism for network packets is disclosed. A three-dimensional coil may be expanded or contracted in length. In addition, the size of each loop of the three-dimensional coil may be adjusted. Moreover, simple circular queue and dynamic buffer management techniques are combined to implement circular queues that may be adjusted in size. Size adjustment, in turn, causes an entire queue either to expand or contract. Circular queue size is changed dynamically, without any copying or moving of queue data. This advantage is attained with little overhead added to conventional circular queues, and is useful in reducing memory requirements for simple circular queues by adjusting queue size as needs change. This is particularly useful for multiple queues that share the same memory space.
Type: Grant
Filed: March 27, 2002
Date of Patent: November 2, 2004
Assignee: Intel Corporation
Inventors: Siu H Lam, Kai X Miao
-
Patent number: 6807588
Abstract: A sectioned ordered queue in an information handling system comprises a plurality of queue sections arranged in order from a first queue section to a last queue section. Each queue section contains one or more queue entries that correspond to available ranges of real storage locations and are arranged in order from a first queue entry to a last queue entry. Each queue section and each queue entry in the queue sections has a weight factor defined for it. Each queue entry has an effective weight factor formed by combining the weight factor defined for the queue section with the weight factor defined for the queue entry. A new entry is added to the last queue section to indicate a newly available corresponding storage location, and one or more queue entries are deleted from the first section of the queue to indicate that the corresponding storage locations are no longer available.
Type: Grant
Filed: February 27, 2002
Date of Patent: October 19, 2004
Assignee: International Business Machines Corporation
Inventors: Tri M. Hoang, Tracy D. Butler, Danny R. Sutherland, David B. Emmes, Mariama Ndoye, Elpida Tzortzatos
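The effective-weight scheme of 6807588 can be sketched as below. The combining rule (simple addition) is an assumption; the abstract says only that the two weight factors are "combined". Add-at-last-section and delete-from-first-section follow the abstract:

```python
# Two sections, each with its own weight and ordered entries (illustrative data).
sections = [
    {"weight": 100, "entries": [{"weight": 3}, {"weight": 7}]},   # first section
    {"weight": 200, "entries": [{"weight": 1}]},                  # last section
]

def effective_weights(sections):
    # Effective weight = section weight combined with entry weight
    # (addition assumed here).
    return [s["weight"] + e["weight"] for s in sections for e in s["entries"]]

def add_entry(sections, entry):
    sections[-1]["entries"].append(entry)   # new entries join the last section

def delete_first(sections):
    return sections[0]["entries"].pop(0)    # deletions come from the first section

add_entry(sections, {"weight": 5})
assert effective_weights(sections) == [103, 107, 201, 205]
assert delete_first(sections)["weight"] == 3
```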
-
Patent number: 6802027
Abstract: An electric or electronic circuit arrangement, and a method, detect and/or identify and/or record at least an access violation, particularly at least a memory access violation, in a microcontroller provided particularly for a chip card or smart card. The source causing the access violation (referred to as the break source), as well as the code address occurring upon the violation, can be detected and/or identified and/or recorded when an access violation occurs during the program run. The circuit arrangement comprises at least a memory unit; at least an interface unit assigned to the memory unit; and at least a processor unit connected to the memory unit, particularly via the interface unit, for executing instruction codes.
Type: Grant
Filed: February 19, 2002
Date of Patent: October 5, 2004
Assignee: Koninklijke Philips Electronics N.V.
Inventors: Wolfgang Buhr, Detlef Mueller, Dieter Hagedorn
-
Patent number: 6799229
Abstract: A system which includes a DMA (Direct Memory Access) interface and a MAC (Media Access Control) interface. A data FIFO and a data burst information FIFO are disposed between the DMA interface and the MAC interface, and the system is configured so that information contained in the data burst information FIFO is used to discard unwanted data contained in the data FIFO, such that the unwanted data is not forwarded to the DMA interface. This facilitates fast and efficient data transfer and avoids wasting DMA bandwidth. Additionally, this avoids or at least reduces the likelihood of FIFO overflow.
Type: Grant
Filed: September 5, 2000
Date of Patent: September 28, 2004
Assignee: LSI Logic Corporation
Inventor: Liang-i Lin
-
Patent number: 6789143
Abstract: A distributed computing system having (host and I/O) end nodes, switches, routers, and links interconnecting these components is provided. The end nodes use send and receive queue pairs to transmit and receive messages. The end nodes use completion queues to inform the end user when a message has been completely sent or received and whether an error occurred during the message transmission or reception process. A mechanism implements these queue pairs and completion queues in hardware. A mechanism for controlling the transfer of work requests from the consumer to the CA hardware and work completions from the CA hardware to the consumer using head and tail pointers that reference circular buffers is also provided. The QPs and CQs do not contain Work Queue Entries and Completion Queue Entries respectively, but instead contain references to these entries.
Type: Grant
Filed: September 24, 2001
Date of Patent: September 7, 2004
Assignee: International Business Machines Corporation
Inventors: David F. Craddock, Thomas Anthony Gregg, Ian David Judd, Gregory Francis Pfister, Renato John Recio, Donald William Schmidt
-
Patent number: 6782461
Abstract: Dynamically adjustable load-sharing circular queues are disclosed. That is, a method is described for reversing the processing order of a second queue placed adjacent to a first queue, allowing the space allocated to both queues to be dynamically adjusted without copying or moving queue data or affecting the performance of the input and output queue functions. These advantages are attained without adding any overhead to conventional circular queues, in terms of processing and memory requirements. Dynamically adjustable circular queues are particularly useful in reducing memory requirements for simple circular queues used in serving either a primary/backup or load-sharing configuration of two input queues. A simple way of determining when the queue sizes can be adjusted is further described.
Type: Grant
Filed: February 25, 2002
Date of Patent: August 24, 2004
Assignee: Intel Corporation
Inventor: Siu H Lam
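A simplified model of the adjacency trick in 6782461: two queues share one buffer, the first growing upward from the bottom and the second (processed in reverse order) growing downward from the top, so the boundary between them can move without copying data. This sketch uses linear rather than circular indexing and invents all names; it is not the patented design:

```python
class SharedQueues:
    """Two queues in one buffer; slots [0, boundary) belong to queue A,
    the rest to queue B, which fills from the top end downward."""

    def __init__(self, size, boundary):
        self.buf = [None] * size
        self.boundary = boundary
        self.a_len = 0      # queue A grows upward from index 0
        self.b_len = 0      # queue B grows downward from index size-1

    def push_a(self, item):
        assert self.a_len < self.boundary, "queue A full"
        self.buf[self.a_len] = item
        self.a_len += 1

    def push_b(self, item):
        assert self.b_len < len(self.buf) - self.boundary, "queue B full"
        self.buf[len(self.buf) - 1 - self.b_len] = item
        self.b_len += 1

    def resize(self, new_boundary):
        # Legal whenever neither queue occupies the reassigned slots;
        # no data is copied or moved.
        assert self.a_len <= new_boundary
        assert self.b_len <= len(self.buf) - new_boundary
        self.boundary = new_boundary

q = SharedQueues(size=8, boundary=4)
q.push_a("a0")
q.push_b("b0")
q.resize(6)                 # give queue A two more slots, no data moved
q.push_a("a1")
assert q.buf[0] == "a0" and q.buf[7] == "b0"
```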
-
Patent number: 6782484
Abstract: A system and method are disclosed for power management that reduces computer power consumption by causing a low power state (‘suspend’ mode) to be entered by specific, peripheral device-related computer components, wherein the components enter the suspend mode after a short period of peripheral device inactivity and are able to resume to an ‘active’ mode quickly without losing information entered into the peripheral device during the transition from the suspend mode to the active mode. To prevent loss of information during the transition, a peripheral memory device is utilized to store the information inputted during the transition and to deliver the information upon reaching the active mode.
Type: Grant
Filed: December 22, 2000
Date of Patent: August 24, 2004
Assignee: Intel Corporation
Inventors: Steve B. McGowan, John I. Garney
-
Patent number: 6782535
Abstract: The present invention provides a distributed computing system and method for efficiently utilizing system resources with a variable width queue to handle resource contention. The present invention varies the width of the queue of active tasks such that as a resource becomes idle, a new task is added to the queue, thereby incrementing the width of the queue and fully utilizing system resources.
Type: Grant
Filed: August 30, 2000
Date of Patent: August 24, 2004
Assignee: Creo Inc.
Inventors: Robert Dal-Santo, Lawrence H. Croft
-
Patent number: 6779084
Abstract: The use of enqueue operations to append multi-buffer packets to the end of a queue includes receiving a request to place a string of linked buffers in a queue, specifying a first buffer in the string and a queue descriptor associated with the first buffer in the string, updating the buffer descriptor that points to the last buffer in the queue to point to the first buffer in the string, and updating a tail pointer to point to the last buffer in the string.
Type: Grant
Filed: January 23, 2002
Date of Patent: August 17, 2004
Assignee: Intel Corporation
Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein
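The enqueue operation of 6779084 reduces to two pointer updates: link the queue's last buffer to the first buffer of the incoming string, then move the tail pointer to the string's last buffer. A minimal linked-list sketch with illustrative names:

```python
class Buffer:
    def __init__(self, data):
        self.data = data
        self.next = None

class BufferQueue:
    def __init__(self):
        self.head = self.tail = None

    def enqueue_string(self, first, last):
        """Append a pre-linked string of buffers in O(1)."""
        if self.tail is None:
            self.head = first
        else:
            self.tail.next = first   # old last buffer now points at the string
        self.tail = last             # tail points at the string's last buffer

def chain(*datas):
    """Build a linked string of buffers; returns (first, last)."""
    bufs = [Buffer(d) for d in datas]
    for a, b in zip(bufs, bufs[1:]):
        a.next = b
    return bufs[0], bufs[-1]

q = BufferQueue()
q.enqueue_string(*chain("p1a", "p1b"))   # first multi-buffer packet
q.enqueue_string(*chain("p2a"))          # second packet
out, node = [], q.head
while node:
    out.append(node.data)
    node = node.next
assert out == ["p1a", "p1b", "p2a"]
```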
-
Patent number: 6775722
Abstract: An architecture for data retrieval from a plurality of coupling queues. At least first and second data queues are provided for receiving data thereinto. The data is read from the at least first and second data queues with reading logic, the reading logic reading the data according to a predetermined queue selection algorithm. The data read by the reading logic is forwarded to an output queue.
Type: Grant
Filed: July 5, 2001
Date of Patent: August 10, 2004
Assignee: Zarlink Semiconductor V. N. Inc.
Inventors: David Wu, Jerry Kuo
-
Patent number: 6772247
Abstract: A circuit that merges and aligns data that resides in a buffer entry is described. The data residing in the buffer entry is divided into a prepend portion and a payload portion. The prepend and the payload portions of the data are each defined, in part, by a length and an offset. Given the lengths and offsets, the circuit fetches the data from the buffer entry, merges the data, and aligns the data.
Type: Grant
Filed: November 19, 2002
Date of Patent: August 3, 2004
Assignee: Intel Corporation
Inventor: Raymond Ng
-
Patent number: 6772297
Abstract: A data replace control is activated prior to the execution of a cache memory reference instruction so as to reduce the latency when a cache memory miss occurs. In a cache replace control of a load store unit, a load store unit controlling device comprises a first queue selection logical circuit 41, a second queue selection logical circuit 42 and a mediating unit 43. The first queue selection logical circuit sequentially selects access instructions to access the cache memory which are stored in queues 31. The second queue selection logical circuit selects unissued access instructions, of the access instructions stored in the queues, prior to the selections by the first queue selection logical circuit. The mediating unit mediates between the access instructions selected by the first queue selection logical circuit and the pre-access instructions selected by the second queue selection logical circuit for accessing the cache memory.
Type: Grant
Filed: March 27, 2001
Date of Patent: August 3, 2004
Assignee: Fujitsu Limited
Inventor: Toshiyuki Muta
-
Patent number: 6769027
Abstract: The invention is directed to an architecture that includes a first queue 128, a first queue reader 148 that reads data entities in the first queue, and a first queue writer 124 that writes data entities to the first queue. The architecture is able to periodically set one or more backup administration parameters equal to administration parameter(s) associated with the first queue 128 and, when the first queue 128 is restored, copy data entities from backup storage to the first queue 128 and set the administration parameter(s) to the corresponding backup administration parameter(s).
Type: Grant
Filed: January 31, 2000
Date of Patent: July 27, 2004
Assignee: Avaya Technology Corp.
Inventors: Robert William Gebhardt, Barbara Patrice Havens
-
Patent number: 6763405
Abstract: In order to enable interfacing of a microprocessor (1) with a peripheral (3) consisting of a device operating according to high-speed communication specifications (for example, IEEE 1394), it is envisaged that the interface (4) should contain a dedicated memory (40) designed to smooth the delays in communication between the main memory (2) and the peripheral (3). The memory (40) has a trigger (10) that is programmable via software to start a communication when a fraction of the memory (40) or the entire memory (40) is full. When a multiple packet starts to be transferred, a signal is generated to alert the microprocessor (1) of the fact that a transfer is almost completed.
Type: Grant
Filed: October 1, 2001
Date of Patent: July 13, 2004
Assignee: STMicroelectronics S.r.l.
Inventors: Michele Sardo, Rosario Miritello
-
Patent number: 6757679
Abstract: An electronic queue management system for implementation on a chip. The queue management system comprises a plurality of primitive queue elements, each including a register for a next-pointer and a register for a queue number. The next-pointer values may be selected via a register input and can be fed out via a registered output. Such queue elements are associated with a respective entry in a central array which stores the data belonging to the actual request. The separation of the data array and queue elements facilitates queue management, as the data amounts are quite large compared to the small amount of data required for the pre logic of the queue management system. Multiple add-request and multiple remove-request operations for different queue elements may be achieved concurrently in a single cycle.
Type: Grant
Filed: May 11, 2000
Date of Patent: June 29, 2004
Assignee: International Business Machines Corporation
Inventor: Rolf Fritz
-
Patent number: 6757756
Abstract: The present invention relates to a queuing system, implemented in the memory of a computer by the execution of a program element. The queuing system includes a queue with a plurality of memory slots, a write pointer and a read pointer. The write pointer permits enqueuing data elements in successive memory slots of the queue. The read pointer permits dequeuing data elements from the queue memory slots for processing, where these data elements are potentially non-dequeuable. Upon identifying a non-dequeuable data element in a particular memory slot of the queue, the read pointer is capable of skipping over the particular memory slot and moving on to a successive memory slot.
Type: Grant
Filed: March 19, 2003
Date of Patent: June 29, 2004
Assignee: Nortel Networks Limited
Inventors: Stephen Lanteigne, David Lewis
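The skip-over behavior of 6757756 can be sketched as follows. This toy version uses a linear (non-wrapping) slot array and simply leaves skipped slots behind rather than revisiting them later; names and the (value, ready) representation are assumptions:

```python
class SkippingQueue:
    """Read pointer that skips slots whose element is not yet
    dequeuable and moves on to the next slot."""

    def __init__(self, slots):
        self.slots = slots   # list of (value, dequeuable) pairs
        self.read = 0

    def dequeue(self):
        while self.read < len(self.slots):
            value, ready = self.slots[self.read]
            self.read += 1               # advance past the slot either way
            if ready:
                return value
        return None                      # nothing dequeuable right now

q = SkippingQueue([("a", True), ("b", False), ("c", True)])
assert q.dequeue() == "a"
assert q.dequeue() == "c"                # "b" was skipped over
```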
-
Patent number: 6757791
Abstract: A method and system for reordering data units that are to be written to, or read from, selected locations in a memory are described herein. The data units are reordered so that an order of accessing memory is optimal for speed of reading or writing memory, not necessarily an order in which data units were received or requested. Packets that are received at input interfaces are divided into cells, with cells being allocated to independent memory banks. Many such memory banks are kept busy concurrently, so cells (and thus the packets) are read into the memory as rapidly as possible. The system may include an input queue for receiving data units in a first sequence and a set of storage queues coupled to the input queue for receiving data units from the input queue. The data units may be written from the storage queues to the memory in an order other than the first sequence.
Type: Grant
Filed: March 30, 1999
Date of Patent: June 29, 2004
Assignee: Cisco Technology, Inc.
Inventors: Robert O'Grady, Sonny N. Tran, Yie-Fong Dan, Bruce Wilford
-
Patent number: 6754741
Abstract: A FIFO buffer arrangement is disclosed that is capable of buffering and transferring data between multiple input and output datapaths of varying widths. All of the input and output buses may be used to transfer data concurrently. Data that are written to the FIFO via any of the input buses may be extracted from the FIFO via any of the output buses. The FIFO efficiently carries out all necessary width conversions when performing the data transfers.
Type: Grant
Filed: May 10, 2001
Date of Patent: June 22, 2004
Assignee: PMC-Sierra, Inc.
Inventors: Thomas Alexander, David Wong
-
Patent number: 6754742
Abstract: The invention relates to a buffer memory, method and a buffer controller for queue management usable in an ATM switch. An object of the invention is to achieve a high frequency throughput of data cells in the buffer memory. This object is achieved by using a buffer memory which is organized as 256*(424+8) SRAM-cells. The memory is used for holding ten queues, one for each incoming channel and two free-queues containing idle cells.
Type: Grant
Filed: October 27, 1999
Date of Patent: June 22, 2004
Assignee: SwitchCore AB
Inventors: Jonas Alowersson, Per Andersson, Bertil Roslund, Patrik Sundström
-
Publication number: 20040111543
Abstract: A device is presented including a host controller capable of attaching a quantity of queue heads to a frame list. The quantity of queue heads are attached to the frame list before any transaction descriptors where split-isochronous transaction descriptors are supported.
Type: Application
Filed: December 2, 2003
Publication date: June 10, 2004
Inventors: Brian A. Leete, John I. Garney
-
Publication number: 20040105123
Abstract: Systems for accessing information corresponding to print jobs are provided. An exemplary system includes a print preview system that operates in conjunction with a printing device. Specifically, the printing device includes a display device and stores information corresponding to a print job in memory. The print preview system is operative to receive information corresponding to a request to preview at least a portion of the print job. The print preview system also is operative to access the information corresponding to the print job, and display a thumbnail graphical representation corresponding to the portion of the print job requested via the display device. Methods, computer-readable media and other systems also are provided.
Type: Application
Filed: December 2, 2002
Publication date: June 3, 2004
Inventors: Terry M. Fritz, Dana A. Jacobsen
-
Patent number: 6745266
Abstract: A disk cache translation system for mapping data record lengths between systems having different data record lengths. Command queue (315) maps into initiation queue (305) to allow I/O manager (230) to manage I/O requests from operating system (125). I/O requests are statused by I/O manager (230) using status queue (325). Store-thru cache (280) provides a single interface to disk array (270) such that disk array write operations are reported complete only when user memory (250), I/O cache (280) and disk array (270) are synchronized. Data record length translations are performed using I/O cache (280) in order to align data record length differences between operating system (125) and I/O device (270).
Type: Grant
Filed: December 21, 2001
Date of Patent: June 1, 2004
Assignee: Unisys Corporation
Inventors: Craig B. Johnson, Dennis R. Konrad, Michael C. Otto
-
Patent number: 6745262
Abstract: Disclosed is a method, system, program, and data structure for queuing requests. Each request is associated with one of a plurality of priority levels. A queue is generated including a plurality of entries. Each entry corresponds to a priority level and a plurality of requests can be queued at one entry. When a new request having an associated priority is received to enqueue on the queue, a determination is made of an entry pointed to by a pointer. The priority associated with the new request is adjusted by a value such that the adjusted priority is associated with an entry different from the entry pointed to by the pointer. The new request is queued at one entry associated with the adjusted priority.
Type: Grant
Filed: January 6, 2000
Date of Patent: June 1, 2004
Assignee: International Business Machines Corporation
Inventors: Michael Thomas Benhase, James Chienchiung Chen
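The pointer-relative priority adjustment of 6745262 can be sketched like this. The specific offset rule (pointer + 1 + priority, modulo the number of entries) and all names are assumptions; the abstract requires only that the adjusted priority land on an entry different from the one the pointer references:

```python
NLEVELS = 8

class RelativePriorityQueue:
    """Entries indexed by adjusted priority; the service pointer sweeps
    forward and requests are enqueued relative to its position."""

    def __init__(self):
        self.entries = [[] for _ in range(NLEVELS)]
        self.pointer = 0

    def enqueue(self, request, priority):
        # Offset by the pointer so the request never lands on the entry
        # currently pointed to (assumed rule).
        adjusted = (self.pointer + 1 + priority) % NLEVELS
        self.entries[adjusted].append(request)
        return adjusted

    def dequeue(self):
        for i in range(NLEVELS):
            slot = (self.pointer + i) % NLEVELS
            if self.entries[slot]:
                self.pointer = slot
                return self.entries[slot].pop(0)
        return None

q = RelativePriorityQueue()
q.enqueue("low", 5)       # lands on entry 6
q.enqueue("high", 0)      # lands on entry 1
assert q.dequeue() == "high"   # nearer entry past the pointer is served first
```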
-
Patent number: 6732199
Abstract: A system and method for scheduling packet output according to a quality of service (QoS) action specification. A system is provided with a calendar queue with a plurality of bandwidth timeslots, wherein the bandwidth timeslots are organized into groups. A look-up logic circuitry inspects a group of bandwidth timeslots substantially simultaneously and determines from the group a first unoccupied bandwidth timeslot in which a current packet can be scheduled. The look-up logic circuitry also determines a first occupied bandwidth timeslot that contains a next packet to be transmitted.
Type: Grant
Filed: December 16, 1999
Date of Patent: May 4, 2004
Assignee: Watchguard Technologies, Inc.
Inventors: JungJi John Yu, Chih-Wei Chao, Fu-Kuang Frank Chao
-
Patent number: 6728801
Abstract: A device is presented including a host controller capable of attaching a quantity of queue heads to a frame list. The quantity of queue heads are attached to the frame list before any transaction descriptors. Further presented is a method including determining whether a queue head has a packet size less than or equal to a predetermined packet size and whether a period is greater than or equal to a predetermined schedule window. The method includes storing the contents of a current entry in a frame list in a next pointer in the queue head, and replacing the current entry in the frame list with a pointer to a new queue head. Many queue heads are directly coupled to the frame list.
Type: Grant
Filed: June 29, 2001
Date of Patent: April 27, 2004
Assignee: Intel Corporation
Inventors: Brian A. Leete, John I. Garney
-
Patent number: 6728800
Abstract: A method and apparatus for an efficient performance based scheduling mechanism for handling multiple TLB operations. One method of the present invention comprises prioritizing a first translation lookaside buffer request and a second translation lookaside buffer request for handling. The first request is of a first type and the second request is of a second type, the first type having a higher priority than the second type.
Type: Grant
Filed: June 28, 2000
Date of Patent: April 27, 2004
Assignee: Intel Corporation
Inventors: Allisa Chiao-Er Lee, Greg S. Mathews
-
Patent number: 6714959
Abstract: A circular queue is created with N Fixed Timer Entries associated with a specific address pointer for each entry. An association is developed to relate each fixed entry pointer to its just previous pointer and to its just next occurring pointer. A selected transient New Timer Entry can be inserted between any two selected adjacent Fixed Timer Entries without need to sequence serially through the entire set of fixed entries.
Type: Grant
Filed: June 23, 2000
Date of Patent: March 30, 2004
Assignee: Unisys Corporation
Inventors: Salil Dangi, Roger Andrew Jones
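The fixed/transient timer-entry structure of 6714959 maps naturally onto a circular, doubly linked ring: each fixed entry knows its previous and next neighbors, so a transient entry splices in with four pointer writes and no traversal. A sketch with illustrative names:

```python
class TimerEntry:
    def __init__(self, label, fixed=False):
        self.label, self.fixed = label, fixed
        self.prev = self.next = self

def make_wheel(n):
    """Build a circular, doubly linked ring of n fixed timer entries."""
    entries = [TimerEntry(f"fixed{i}", fixed=True) for i in range(n)]
    for a, b in zip(entries, entries[1:] + entries[:1]):
        a.next, b.prev = b, a
    return entries

def insert_after(anchor, entry):
    """Splice a transient entry between anchor and anchor.next without
    walking the rest of the ring."""
    entry.prev, entry.next = anchor, anchor.next
    anchor.next.prev = entry
    anchor.next = entry

wheel = make_wheel(4)
insert_after(wheel[1], TimerEntry("transient"))
labels, node = [], wheel[0]
for _ in range(5):
    labels.append(node.label)
    node = node.next
assert labels == ["fixed0", "fixed1", "transient", "fixed2", "fixed3"]
```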
-
Patent number: 6715055
Abstract: An apparatus is described in which the locations of a buffer are used to store a plurality of control packets received in a node, wherein the plurality of control packets belong to a plurality of virtual channels. The number of locations assigned to each virtual channel may be dynamically allocated. The number of locations allocated to each virtual channel may be determined by an update circuit. Count values corresponding to the number of locations allocated to each virtual channel may then be stored within a programmable storage, such as a register, for example. The count values may be subsequently copied into a slave register and incremented and decremented as locations become available and notifications corresponding to the available locations are sent, respectively.
Type: Grant
Filed: October 15, 2001
Date of Patent: March 30, 2004
Assignee: Advanced Micro Devices, Inc.
Inventor: William Alexander Hughes
-
Patent number: 6701393
Abstract: A device (e.g., a secondary cache device) manages descriptors which correspond to storage locations (e.g., cache blocks). The device includes memory and a control circuit coupled to the memory. The control circuit is configured to arrange the descriptors, which correspond to the storage locations, into multiple queues within the memory based on storage location access frequencies. The control circuit is further configured to determine whether an expiration timer for the particular descriptor has expired in response to a particular descriptor reaching a head of a particular queue. The control circuit is further configured to move the particular descriptor from the head of the particular queue to a different part of the multiple queues, wherein the different part is identified based on access frequency when the expiration timer for the particular descriptor has not expired, and not based on access frequency when the expiration timer for the particular descriptor has expired.
Type: Grant
Filed: June 27, 2002
Date of Patent: March 2, 2004
Assignee: EMC Corporation
Inventors: John Kemeny, Naizhong Qui, Xueying Shen
-
Patent number: 6697888
Abstract: Providing electrical isolation between the chipset and the memory data is disclosed. The disclosure includes providing at least one buffer in a memory interface between a chipset and memory modules. Each memory module includes a plurality of memory ranks. The buffers allow the memory interface to be split into first and second sub-interfaces. The first sub-interface is between the chipset and the buffers. The second sub-interface is between the buffers and the memory modules. The method also includes interleaving output of the buffers, and configuring the buffers to properly latch the data being transferred between the chipset and the memory modules. The first and second sub-interfaces operate independently but in synchronization with each other.
Type: Grant
Filed: September 29, 2000
Date of Patent: February 24, 2004
Assignee: Intel Corporation
Inventors: John B. Halbert, Jim M. Dodd, Chung Lam, Randy M. Bonella
-
Patent number: 6694388
Abstract: A dynamic queuing system wherein a single memory is shared among a plurality of different queues. A single memory, termed a queue memory, is dynamically shared by one or more queues. The queue memory is divided into a plurality of memory blocks that are initially empty. An empty list functions to track which memory blocks are empty and available for use in a queue. Each queue constructed utilizes one or more memory blocks. When a queue becomes full, an additional memory block is allocated to it. Conversely, as memory blocks of a queue are read, i.e. emptied, they are returned to the pool of empty memory blocks for use by other queues.
Type: Grant
Filed: May 31, 2000
Date of Patent: February 17, 2004
Assignee: 3Com Corporation
Inventors: Golan Schzukin, Roni Elran, Zvika Bronstein, Ilan Shimony
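The block-pool scheme of 6694388 can be sketched as below: one queue memory divided into fixed-size blocks, an empty list tracking free blocks, and queues grabbing a block whenever they fill their current allocation. The block size, pool size, and all names are illustrative:

```python
BLOCK_SIZE = 4   # items per memory block (illustrative)

class BlockPool:
    """Empty list tracking which memory blocks are free."""

    def __init__(self, nblocks):
        self.empty = list(range(nblocks))   # indices of free blocks

    def allocate(self):
        return self.empty.pop() if self.empty else None

    def release(self, block):
        self.empty.append(block)

class PooledQueue:
    def __init__(self, pool):
        self.pool, self.blocks, self.items = pool, [], 0

    def enqueue(self, item):
        if self.items == len(self.blocks) * BLOCK_SIZE:
            block = self.pool.allocate()    # queue full: grow by one block
            assert block is not None, "pool exhausted"
            self.blocks.append(block)
        self.items += 1

pool = BlockPool(nblocks=2)
q = PooledQueue(pool)
for _ in range(5):                          # the fifth item forces a second block
    q.enqueue("x")
assert len(q.blocks) == 2 and len(pool.empty) == 0
```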
-
Patent number: 6680737
Abstract: Frame buffer memory bandwidth is conserved by performing a depth comparison between colliding pixels at batch building time. If the incoming pixel fails the depth comparison, then it may be “tossed” and excluded from any batches currently under construction. The batch building process may then continue without the need for a batch flush responsive to the occurrence of the pixel collision. If the incoming pixel passes the depth comparison, then it may yet be possible to avoid flushing: The current rendering mode of the pipeline is determined. If the current rendering mode does not require read-modify-write operations, then the incoming pixel may be merged with the buffered pixel with which it collides. Merger of the two pixels may be accomplished by overwriting the buffered RGBA pixel components with those of the incoming pixel, but only those components corresponding to asserted bits in the incoming pixel's BEN.
Type: Grant
Filed: December 12, 2002
Date of Patent: January 20, 2004
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Jon L Ashburn, Darel N Emmot, Byron A Alcorn
-
Publication number: 20040003149Abstract: Decimation of data from a fixed length queue retaining a representative sample of the old data. Exponential decimation removes every nth sample. Dithered exponential decimation offsets the exponential decimation approach by a probabilistic amount. Recursive decimation selects a portion of the queue and removes elements.Type: ApplicationFiled: June 26, 2002Publication date: January 1, 2004Inventors: Glenn R. Engel, Bruce Hamilton
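The three decimation strategies are simple to state in code. A minimal sketch, assuming 1-based positions, a ±1 dither, and half-open portion bounds (all parameter choices are ours, not the application's):

```python
import random

def exponential_decimate(queue, n):
    """Remove every nth sample (positions n, 2n, 3n, ...)."""
    return [x for i, x in enumerate(queue, start=1) if i % n != 0]

def dithered_decimate(queue, n, rng=random):
    """Offset each removal position by a small probabilistic amount."""
    kept, next_drop = [], n + rng.randint(-1, 1)
    for i, x in enumerate(queue, start=1):
        if i == next_drop:
            next_drop = i + n + rng.randint(-1, 1)
            continue
        kept.append(x)
    return kept

def recursive_decimate(queue, lo, hi, n):
    """Select a portion [lo, hi) of the queue and remove elements from it."""
    return queue[:lo] + exponential_decimate(queue[lo:hi], n) + queue[hi:]
```

The dithered variant avoids the aliasing that a strict every-nth rule would impose on periodic data, while still thinning at roughly the same rate.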
-
Patent number: 6668291Abstract: Multiple non-blocking FIFO queues are concurrently maintained using atomic compare-and-swap (CAS) operations. In accordance with the invention, each queue provides direct access to the nodes stored therein to an application or thread, so that each thread may enqueue and dequeue nodes that it may choose. The prior art merely provided access to the values stored in the node. In order to avoid anomalies, the queue is never allowed to become empty by requiring the presence of at least a dummy node in the queue. The ABA problem is solved by requiring that the next pointer of the tail node in each queue point to a “magic number” unique to the particular queue, such as the pointer to the queue head or the address of the queue head, for example. This obviates any need to maintain a separate count for each node.Type: GrantFiled: September 9, 1999Date of Patent: December 23, 2003Assignee: Microsoft CorporationInventors: Alessandro Forin, Andrew Raffman
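The dummy-node idea can be modeled as below. This is a simplified single-queue sketch in which the atomic CAS primitive is simulated with a lock, and the patent's "magic number" ABA protection is omitted; it is not the patented implementation:

```python
import threading

_lock = threading.Lock()

def cas(obj, field, expected, new):
    """Simulated atomic compare-and-swap on an attribute."""
    with _lock:
        if getattr(obj, field) is expected:
            setattr(obj, field, new)
            return True
        return False

class Node:
    def __init__(self, value=None):
        self.value = value
        self.next = None

class NonBlockingQueue:
    def __init__(self):
        dummy = Node()            # the queue is never allowed to become empty
        self.head = dummy
        self.tail = dummy

    def enqueue(self, node):
        # Threads enqueue nodes they allocate themselves, per the abstract.
        while True:
            tail = self.tail
            if cas(tail, "next", None, node):
                self.tail = node  # swing the tail (simplified, not fully lock-free)
                return

    def dequeue(self):
        while True:
            head = self.head
            nxt = head.next
            if nxt is None:
                return None       # only the dummy node remains: queue empty
            if cas(self, "head", head, nxt):
                return nxt.value  # nxt becomes the new dummy
```

Because a dummy node is always present, head and tail never both become null, which removes the empty-queue corner cases that complicate lock-free designs.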
-
Patent number: 6665753Abstract: A method, system, and apparatus for modifying bridges within a data processing system to provide improved performance is provided. In one embodiment, the data processing system determines the number of input/output adapters connected underneath each PCI host bridge. The data processing system also determines the type of each input/output adapter. The size and number of buffers within the PCI host bridge is then modified based on the number of adapters beneath it as well as the type of adapters beneath it to improve data throughput performance as well as prevent thrashing of data. The PCI host bridge is also modified to give load and store operations priority over DMA operations. Each PCI-to-PCI bridge is modified based on the type of adapter connected to it such that the PCI-to-PCI bridge prefetches only an amount of data consistent with the type of adapter such that excess data is not thrashed, thus requiring extensive repetitive use of the system buses to retrieve the same data more than once.Type: GrantFiled: August 10, 2000Date of Patent: December 16, 2003Assignee: International Business Machines CorporationInventors: Pat Allen Buckland, Michael Anthony Perez, Kiet Anh Tran, Adalberto Guillermo Yanes
-
Patent number: 6661803Abstract: A network switch includes a plurality of receive ports for receiving addressed data packets and a plurality of transmit ports for forwarding the addressed data packets and is responsive to data in the packets for directing received packets to the transmit ports. The switch includes, with respect to at least one transmit port, a bandwidth controller for at least one selected packet type. The bandwidth controller diminishes an aggregate count in response to the sizes of packets of the one type destined for the transmit port and continually augments the aggregate count at a selectable rate. The switch compares the aggregate count with a threshold and initiates a discard of packets of the one type before they can be forwarded from the transmit port so as to limit the proportion of available bandwidth occupied by packets of the one type with respect to the transmit port.Type: GrantFiled: December 30, 1999Date of Patent: December 9, 2003Assignee: 3Com CorporationInventors: Kam Choi, Patrick Gibson, Christopher Hay, Gareth E Allwright
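Read as a token-bucket variant, the controller might be modeled as follows. The "discard while the count is below the threshold" reading, and all names and units, are our assumptions for illustration:

```python
class BandwidthController:
    """Per-port limiter for one packet type, modeled on the abstract above."""

    def __init__(self, rate, ceiling, threshold):
        self.count = ceiling      # aggregate count, e.g. bytes of credit
        self.rate = rate          # selectable replenishment per tick
        self.ceiling = ceiling
        self.threshold = threshold

    def tick(self):
        # Continually augment the aggregate count at the selectable rate.
        self.count = min(self.count + self.rate, self.ceiling)

    def admit(self, packet_size):
        # Compare the count with the threshold; initiate a discard before
        # forwarding once the packet type has used up its share.
        if self.count < self.threshold:
            return False
        # Diminish the aggregate count by the size of the forwarded packet.
        self.count -= packet_size
        return True
```

Choosing `rate` relative to line speed is what bounds the proportion of bandwidth the selected packet type can occupy on that transmit port.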
-
Patent number: 6651116Abstract: An output interface allows a user circuit to access data for multiple objects in an interleaved fashion. Status information is provided to guarantee data availability before each transfer sequence is started. An identifier is provided for each object. Each identifier, after data transfer has ended, may be subsequently reused to identify a different object. The interface provides the ability to retrieve all data in an object or to cancel the object before reaching the end and discarding the unretrieved data. The objects are provided to the appropriate processing mechanisms within the printer to implement a printing task. These objects correspond to images and text to be printed on a page. Object data is temporarily stored in limited data memory of the memory system and object headers are stored in header memory before transfer via the output interface. Each object to be printed has an object header and may, or may not, have associated object data.Type: GrantFiled: May 15, 2000Date of Patent: November 18, 2003Assignee: International Business Machines CorporationInventors: Steven G. Ludwig, Stephen D. Hanna, Howard C. Jackson
-
Patent number: 6647505Abstract: A system and method using a timer management module for managing a circular queue having fixed timer entries and temporary new timer entries to enable location of specified new timer entries which can then be deleted at the appropriate time in timer management operations.Type: GrantFiled: June 23, 2000Date of Patent: November 11, 2003Assignee: Unisys CorporationInventors: Salil Dangi, Roger Andrew Jones
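A minimal sketch of such a circular queue, assuming a slot array and a linear lookup (the patent only specifies that named new-timer entries can be located and deleted at the appropriate time):

```python
class CircularTimerQueue:
    """Circular queue mixing fixed entries and temporary 'new timer' entries."""

    def __init__(self, size):
        self.slots = [None] * size     # slot -> (timer_id, is_fixed) or None
        self.head = 0

    def insert(self, timer_id, fixed=False):
        # Place the entry in the next free slot, wrapping circularly.
        for i in range(len(self.slots)):
            slot = (self.head + i) % len(self.slots)
            if self.slots[slot] is None:
                self.slots[slot] = (timer_id, fixed)
                return slot
        raise OverflowError("timer queue full")

    def delete_new_timer(self, timer_id):
        # Locate the specified temporary entry; fixed entries are never removed.
        for slot, entry in enumerate(self.slots):
            if entry == (timer_id, False):
                self.slots[slot] = None
                return True
        return False
```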
-
Patent number: 6643718Abstract: A barrier control scheme controls the order dependency of items in a multiple FIFO queue structure. The barrier control scheme includes a cycle ID generator, a barrier bit/barrier ID generator and a cycle ID and barrier ID comparator. Each incoming item to the FIFOs is assigned a cycle ID. If an incoming item of a first FIFO has order dependency on items of a second FIFO, a barrier bit is set and a barrier ID is determined and generated by the barrier bit/barrier ID generator. The barrier bit and barrier ID are inserted in the first FIFO along with other fields of the incoming item. When an item is to be consumed, the cycle ID and barrier ID comparator compares its barrier ID and the cycle IDs of items in the second FIFO. The item to be consumed is blocked until all items on which the item is dependent are consumed in the second FIFO.Type: GrantFiled: July 21, 2000Date of Patent: November 4, 2003Assignee: Silicon Integrated Systems CorporationInventors: Chao-Yu Chen, Hui-Neng Chang, Sui-His Chu
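The cycle-ID/barrier-ID comparison can be sketched with two software FIFOs. The data layout is an illustrative assumption; the patent describes hardware generators and comparators:

```python
from collections import deque

class BarrierFifos:
    """Two FIFOs where items in A may be order-dependent on items in B."""

    def __init__(self):
        self.cycle = 0            # cycle ID generator
        self.a = deque()          # entries: (item, barrier_id or None)
        self.b = deque()          # entries: (item, cycle_id)

    def push_b(self, item):
        self.cycle += 1
        self.b.append((item, self.cycle))

    def push_a(self, item, depends_on_b=False):
        self.cycle += 1
        # The barrier ID records the newest B item this A item must wait for.
        barrier = self.b[-1][1] if depends_on_b and self.b else None
        self.a.append((item, barrier))

    def pop_b(self):
        return self.b.popleft()[0]

    def pop_a(self):
        item, barrier = self.a[0]
        # Block consumption until every B item at or before the barrier ID
        # has itself been consumed.
        if barrier is not None and self.b and self.b[0][1] <= barrier:
            return None
        self.a.popleft()
        return item
```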
-
Patent number: 6640245Abstract: A computer network guarantees timeliness to distributed real-time applications by allowing an application to specify its timeliness requirements and by ensuring that a data source can meet the specified requirements. A reflective memory area is established by either a data source or an application. A data source maps onto this reflective memory area and writes data into it. In order to receive data from this data source, an application requests attachment to the reflective memory area to which the data source is mapped and specifies timeliness requirements. The application may specify that it needs data either periodically or upon occurrence of some condition. The application allocates buffers at its local node to receive data. The data source then establishes a data push agent thread at its local node, and a virtual channel over the computer network between the data push agent thread and the application attached to its reflective memory area.Type: GrantFiled: September 1, 1999Date of Patent: October 28, 2003Assignee: Mitsubishi Electric Research Laboratories, Inc.Inventors: Chia Shen, Ichiro Mizunuma
-
Patent number: 6636909Abstract: According to the invention, systems, apparatus and methods are disclosed for throttling commands to a storage device. This method comprises sending a write request to a disk, receiving a queue full signal from the disk if the disk queue is full, and, responsive to receiving the queue full signal, setting a throttle value.Type: GrantFiled: July 5, 2000Date of Patent: October 21, 2003Assignee: Sun Microsystems, Inc.Inventors: James Kahn, Robert S. Tracy
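A host-side sketch of the throttle, assuming a depth-minus-one throttle policy (the patent only requires that some throttle value be set in response to the queue-full signal; the names here are invented):

```python
class ThrottledHost:
    """Host-side write throttling driven by the disk's queue-full signal."""

    def __init__(self, disk_queue_depth):
        self.depth = disk_queue_depth
        self.in_flight = 0
        self.throttle = None       # no throttle until the disk pushes back

    def send_write(self):
        if self.throttle is not None and self.in_flight >= self.throttle:
            return "held"          # host-side throttle holds the request
        if self.in_flight >= self.depth:
            # Disk reports queue full; set a throttle value in response.
            self.throttle = self.depth - 1
            return "queue_full"
        self.in_flight += 1
        return "sent"

    def complete_write(self):
        self.in_flight -= 1
```

Once set, the throttle keeps the host just below the depth at which the disk pushed back, trading a little queue occupancy for avoiding repeated queue-full round trips.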
-
Patent number: 6633575Abstract: A method and apparatus for avoiding packet reordering in multiple-class, multiple-priority networks. The present invention provides a queue implementation technique that can be used in multiple-class, multiple-priority networks such as Differentiated Services networks to ensure that packets are serviced without reordering. The queue implementation technique maintains performance isolation between different classes under some scheduling disciplines and can identify scheduling disciplines which do not degrade the performance seen by the lower priority traffic classes.Type: GrantFiled: April 7, 1999Date of Patent: October 14, 2003Assignee: Nokia CorporationInventor: Rajeev Koodli
-
Patent number: 6633972Abstract: A system and method for substituting dynamic pipelines with static queues in a pipelined processor. The system and method are to provide a reduction in power consumption and clock distribution, as well as other advantages.Type: GrantFiled: June 7, 2001Date of Patent: October 14, 2003Assignee: Intel CorporationInventor: Victor Konrad
-
Patent number: 6631428Abstract: A mechanism that includes an apparatus and method for ensuring that all transactions within any flow control class complete is herein provided. The mechanism includes a plunge transaction that is inserted in each pending transaction queue and which is transmitted to a particular destination device. All prior transactions in a flow control class are deemed to be complete when the destination device receives the plunge transaction for that flow control class.Type: GrantFiled: May 1, 2000Date of Patent: October 7, 2003Assignee: Hewlett-Packard Development Company, L.P.Inventors: Debendra Das Sharma, Edward M. Jacobs, John A. Wickeraad