Prioritized Polling Patents (Class 710/44)
-
Patent number: 6834315
Abstract: Provided is a method, system, and program for managing Input/Output (I/O) requests generated by an application program. The I/O requests are transmitted to an output device. A determination is made of a priority associated with the I/O request, wherein the priority is capable of being at least one of a first priority and a second priority. The I/O request is transmitted if the determined priority is the first priority. Transmittal of the I/O request is deferred if the determined priority is the second priority.
Type: Grant
Filed: March 26, 2001
Date of Patent: December 21, 2004
Assignee: International Business Machines Corporation
Inventor: Richard H. Johnson
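The deferral scheme described above maps to a simple dispatcher: first-priority requests are transmitted at once, second-priority requests are queued until explicitly drained. Below is a minimal sketch of that idea; the names IORequestManager, submit, and drain are hypothetical and not taken from the patent.

```python
from collections import deque

FIRST, SECOND = 1, 2  # hypothetical priority labels

class IORequestManager:
    """Transmit first-priority requests at once; defer second-priority ones."""
    def __init__(self, transmit):
        self.transmit = transmit           # callable that sends a request to the device
        self.deferred = deque()            # second-priority requests awaiting transmittal

    def submit(self, request, priority):
        if priority == FIRST:
            self.transmit(request)         # first priority: send immediately
        else:
            self.deferred.append(request)  # second priority: hold until drain()

    def drain(self):
        # e.g. called when the output device goes idle
        while self.deferred:
            self.transmit(self.deferred.popleft())

# usage
mgr = IORequestManager(transmit=lambda r: print("sent", r))
mgr.submit("read-block-7", FIRST)
mgr.submit("log-flush", SECOND)
mgr.drain()
```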
-
Publication number: 20040225762
Abstract: A storage apparatus is proposed for facilitating wireless communication between a computer device and one or more external portable electronic devices, or between those external devices. The storage apparatus includes a wireless transceiver for entering communication with any one of the devices. When the storage apparatus is communicating with any of the devices, it can transmit to that device any data stored in its memory that is intended for that device. Furthermore, the storage apparatus can receive data from that device and store it in its memory, to be relayed to another of the devices.
Type: Application
Filed: June 14, 2004
Publication date: November 11, 2004
Inventor: Teng Pin Poo
-
Patent number: 6807588
Abstract: A sectioned ordered queue in an information handling system comprises a plurality of queue sections arranged in order from a first queue section to a last queue section. Each queue section contains one or more queue entries that correspond to available ranges of real storage locations and are arranged in order from a first queue entry to a last queue entry. Each queue section and each queue entry in the queue sections has a weight factor defined for it. Each queue entry has an effective weight factor formed by combining the weight factor defined for the queue section with the weight factor defined for the queue entry. A new entry is added to the last queue section to indicate a newly available corresponding storage location, and one or more queue entries are deleted from the first section of the queue to indicate that the corresponding storage locations are no longer available.
Type: Grant
Filed: February 27, 2002
Date of Patent: October 19, 2004
Assignee: International Business Machines Corporation
Inventors: Tri M. Hoang, Tracy D. Butler, Danny R. Sutherland, David B. Emmes, Mariama Ndoye, Elpida Tzortzatos
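A minimal sketch of such a sectioned ordered queue follows, assuming additive combination of the section weight and entry weight (the abstract does not fix a combining rule) and using invented names such as SectionedQueue and remove_first.

```python
class SectionedQueue:
    """Ordered queue split into sections; each entry's effective weight
    combines its section weight with its own weight (here by addition,
    an assumed rule)."""
    def __init__(self, section_weights):
        self.section_weights = section_weights
        self.sections = [[] for _ in section_weights]   # each holds (entry, entry_weight)

    def add(self, entry, entry_weight):
        # a newly available storage range is always appended to the last section
        self.sections[-1].append((entry, entry_weight))

    def remove_first(self):
        # ranges that are no longer available are deleted from the front of the queue
        for section in self.sections:
            if section:
                return section.pop(0)[0]
        return None

    def effective_weights(self):
        return [(entry, self.section_weights[i] + w)
                for i, section in enumerate(self.sections)
                for entry, w in section]

q = SectionedQueue(section_weights=[100, 50, 10])
q.add("frame-0x2000", 3)
q.add("frame-0x3000", 1)
print(q.effective_weights())
print(q.remove_first())
```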
-
Publication number: 20040177147
Abstract: Techniques are disclosed for efficiently updating rendered content (such as content of a Web page) using a “slow-loading” content element, such as a slow-loading image. A reference is embedded within the markup language notation for the content to be rendered, where this reference identifies the source of the slow-loading content element. Delivery of the slow-loading content therefore begins automatically when the content is rendered. Event handling attributes are specified with the reference, where values of these attributes identify client-side logic to be invoked when the associated event occurs. If the server determines that the rendered content, or some portion thereof, should be asynchronously updated, it abruptly terminates delivery of the slow-loading content. This termination triggers an event handler, which operates to automatically request reloading of the content.
Type: Application
Filed: March 7, 2003
Publication date: September 9, 2004
Applicant: International Business Machines Corporation
Inventors: Niraj P. Joshi, Robert C. Leah, Paul F. McMahan
-
Patent number: 6789175
Abstract: A synchronous dynamic random access memory (“SDRAM”) operates with matching read and write latencies. To prevent data collision at the memory array, the SDRAM includes interim address and interim data registers that temporarily store write addresses and input data until an available interval is located where no read data or read addresses occupy the memory array. During the available interval, data is transferred from the interim data register to a location in the memory array identified by the address in the interim address register. In one embodiment, the SDRAM also includes address and compare logic to prevent reading incorrect data from an address to which the proper data has not yet been written. In another embodiment, a system controller monitors commands and addresses and inserts no-operation commands to prevent such collision of data and addresses.
Type: Grant
Filed: October 9, 2001
Date of Patent: September 7, 2004
Assignee: Micron Technology, Inc.
Inventors: Kevin J. Ryan, Terry R. Lee
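The write-posting behavior can be modeled in a few lines: a write parks in interim address/data registers, a matching read is served from those registers, and the posted write commits to the array during an idle interval. The class LatencyMatchedSDRAM and its methods are illustrative stand-ins, not the patented circuit.

```python
class LatencyMatchedSDRAM:
    """Toy model of write posting with an address-compare forwarding path."""
    def __init__(self, size=16):
        self.array = [0] * size
        self.pending = None            # (address, data) held in the interim registers

    def write(self, addr, data):
        if self.pending:               # a previously posted write must land first
            self._commit()
        self.pending = (addr, data)

    def read(self, addr):
        if self.pending and self.pending[0] == addr:
            return self.pending[1]     # address-compare hit: forward the newer data
        return self.array[addr]

    def idle_cycle(self):
        # an interval with no read traffic: drain the interim registers
        self._commit()

    def _commit(self):
        if self.pending:
            a, d = self.pending
            self.array[a] = d
            self.pending = None

m = LatencyMatchedSDRAM()
m.write(3, 0xAB)
print(hex(m.read(3)))   # 0xab, served from the interim registers
m.idle_cycle()
print(hex(m.array[3]))  # now committed to the array
```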
-
Patent number: 6771655
Abstract: A method and apparatus for managing transportation of data include processing that begins by polling a plurality of local memory entities for transportation of data, wherein the polling is based on a linked list. When a currently polled local memory entity has data to transport, the processing obtains the channel status of a logical channel associated with the currently polled memory entity. Note that the data to transport is contained within a data word that includes a data portion and a tag. Regardless of whether the data to transport is to be transported from local memory to non-local memory or from the non-local memory to local memory, the processing determines data block status based on the data word, wherein the data block is stored in non-local memory. Next, the processing provides, or retrieves, the data portion of the data word to, or from, the non-local memory based on at least one of the channel status, the data word, and the data block status.
Type: Grant
Filed: May 29, 1998
Date of Patent: August 3, 2004
Assignee: Alcatel Canada Inc.
Inventors: Gareth P. O'Loughlin, Michel J. P. Patoine, J. Morgan Smail
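A rough sketch of the polling loop, assuming a simplified data word of (data, tag) and skipping the channel- and block-status checks that the abstract describes; Channel, DataWord, and poll_linked_list are invented names.

```python
from collections import namedtuple

# Hypothetical data word: a payload plus a tag describing the transfer.
DataWord = namedtuple("DataWord", "data tag")

class Channel:
    def __init__(self, name, words=None):
        self.name = name
        self.words = list(words or [])   # data waiting to move to non-local memory

    def has_data(self):
        return bool(self.words)

def poll_linked_list(channels, non_local):
    """Walk the channels in linked-list order; for any channel with data to
    transport, move the data portion of the word into non-local memory."""
    for ch in channels:                  # the Python list stands in for the linked list
        if ch.has_data():
            word = ch.words.pop(0)
            # in the patent, the tag would drive channel- and block-status checks here
            non_local.setdefault(ch.name, []).append(word.data)

non_local_memory = {}
ring = [Channel("ch0", [DataWord(b"payload-A", tag=0)]), Channel("ch1")]
poll_linked_list(ring, non_local_memory)
print(non_local_memory)
```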
-
Publication number: 20040148444
Abstract: The present invention provides a system and method for optimizing a cache that substantially eliminates or reduces the disadvantages of previously developed cache management systems. More particularly, embodiments of the present invention provide a system for optimizing a cache by polling cached assets with a frequency dependent on the relative activity of the cached asset. An embodiment of the method of the system includes the steps of: (i) polling a cached asset according to a first schedule to determine if the cached asset has been active within a first predefined period of time; (ii) if the cached asset has not been active, polling the cached asset according to a second schedule to determine if the cached asset has been inactive for at least a second predefined period of time; (iii) if the cached asset has not been active, demoting the cached asset to less active status; and (iv) if the cached asset has been inactive for at least the second predefined period of time, demoting the cached asset to inactive status.
Type: Application
Filed: January 5, 2004
Publication date: July 29, 2004
Inventors: David Thomas, Scott Wells
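The two-schedule demotion policy might look like the sketch below, where the window lengths, the status names, and the poll/touch interface are all assumed for illustration.

```python
import time

ACTIVE, LESS_ACTIVE, INACTIVE = "active", "less-active", "inactive"

class CachedAsset:
    def __init__(self, name):
        self.name = name
        self.status = ACTIVE
        self.last_hit = time.monotonic()

    def touch(self):
        self.last_hit = time.monotonic()
        self.status = ACTIVE

def poll(asset, first_window, second_window, now=None):
    """Demote an asset that has gone quiet: one quiet window demotes it to
    less-active (where it is polled less often), a longer one to inactive."""
    now = now if now is not None else time.monotonic()
    idle = now - asset.last_hit
    if asset.status == ACTIVE and idle > first_window:
        asset.status = LESS_ACTIVE
    elif asset.status == LESS_ACTIVE and idle > second_window:
        asset.status = INACTIVE

a = CachedAsset("/img/banner.png")
poll(a, first_window=60, second_window=600, now=a.last_hit + 120)
print(a.status)   # less-active
```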
-
Patent number: 6763439
Abstract: A system is configured to prioritize streaming disk I/O over non-streaming disk I/O by providing high-priority queuing for streaming disk I/O and/or by throttling non-streaming disk I/O when the total disk I/O (streaming plus non-streaming) exceeds a threshold amount for a given time quantum. When disk throttling is utilized, streaming disk I/O is processed in a first time quantum. Non-streaming disk I/O is processed, as much as possible, in the remainder of the first time quantum. Any non-streaming disk I/O remaining to be processed is deferred to a subsequent time quantum.
Type: Grant
Filed: May 1, 2000
Date of Patent: July 13, 2004
Assignee: Microsoft Corporation
Inventors: David S. Bakin, William G. Parry, Mark H. Lucovsky
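One way to picture the throttling is a per-quantum budget: the streaming queue drains first, then non-streaming requests run only while the budget holds, and the remainder is deferred. The byte-sized budget and the DiskScheduler interface below are assumptions, not the patented mechanism.

```python
from collections import deque

class DiskScheduler:
    """Per-quantum budget: streaming I/O first, then as much non-streaming
    I/O as the remaining budget allows; leftovers roll to the next quantum."""
    def __init__(self, budget_per_quantum):
        self.budget = budget_per_quantum
        self.streaming = deque()        # (name, cost) pairs
        self.non_streaming = deque()

    def run_quantum(self):
        spent, done = 0, []
        while self.streaming:                       # high-priority queue drains first
            name, cost = self.streaming.popleft()
            spent += cost
            done.append(name)
        while self.non_streaming and spent + self.non_streaming[0][1] <= self.budget:
            name, cost = self.non_streaming.popleft()   # only while under the threshold
            spent += cost
            done.append(name)
        return done                                  # anything still queued is deferred

s = DiskScheduler(budget_per_quantum=100)
s.streaming.append(("video-chunk", 70))
s.non_streaming.extend([("backup-1", 20), ("backup-2", 30)])
print(s.run_quantum())   # ['video-chunk', 'backup-1']; backup-2 is deferred
```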
-
Patent number: 6742039
Abstract: A novel system and method for connecting to an entity behind a firewall or proxy enhances network security and eliminates the costs and risks associated with modifying the firewall or proxy. The invention uses a trusted arbitrator as an intermediary between (1) a local area network protected by an access control mechanism such as a firewall or proxy and (2) external entities seeking to connect with an entity within the network. Requests from external entities are routed to the trusted arbitrator, which communicates with a connection entity located behind the firewall or proxy.
Type: Grant
Filed: December 20, 1999
Date of Patent: May 25, 2004
Assignee: Intel Corporation
Inventors: Eric B. Remer, David A. King, David L. Remer
-
Patent number: 6728800
Abstract: A method and apparatus for an efficient performance-based scheduling mechanism for handling multiple TLB operations. One method of the present invention comprises prioritizing a first translation lookaside buffer request and a second translation lookaside buffer request for handling. The first request is of a first type and the second request is of a second type, the first type having a higher priority than the second type.
Type: Grant
Filed: June 28, 2000
Date of Patent: April 27, 2004
Assignee: Intel Corporation
Inventors: Allisa Chiao-Er Lee, Greg S. Mathews
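Type-based prioritization of TLB requests can be sketched with a small priority queue; the operation types and their ranking below are hypothetical examples, since the abstract does not name the types.

```python
import heapq

# Hypothetical ranking of TLB operation types (lower number = served first).
TYPE_PRIORITY = {"fault-walk": 0, "prefetch-fill": 1, "invalidate": 2}

class TLBScheduler:
    def __init__(self):
        self._heap, self._seq = [], 0

    def submit(self, op_type, payload):
        heapq.heappush(self._heap, (TYPE_PRIORITY[op_type], self._seq, payload))
        self._seq += 1                 # sequence number keeps FIFO order within a type

    def next_op(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

tlb = TLBScheduler()
tlb.submit("prefetch-fill", "fill VA 0x7f00")
tlb.submit("fault-walk", "walk VA 0x4000")
print(tlb.next_op())   # the higher-priority fault walk is handled first
```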
-
Patent number: 6553434
Abstract: A system and method of decoupling timing in a high speed bus system. A master/slave translator is coupled between a master device and a slave device. A pseudo slave of the master/slave translator responds to the master in a first timing protocol. A pseudo master of the master/slave translator masters the slave devices under a different timing protocol. The master/slave translator causes the master to believe its communications with the slave device are occurring under the first protocol.
Type: Grant
Filed: August 5, 1999
Date of Patent: April 22, 2003
Assignee: Occam Networks
Inventors: Alfred Abkarian, Kiran Munj, Harun Muliadi
-
Patent number: 6411218
Abstract: In the context of a bus-mastering system, a device selector selects the device to control the bus by assigning “combined” priority values to the devices and selecting the device with the highest combined-priority value. The combined-priority values include relatively high-significance device-specific values and relatively low-significance arbitrary-rank values. At any given time, no two devices share the same arbitrary-rank value, so no two devices can share a combined-priority value. Thus, there are no unresolved selections due to equal priorities. In accordance with the present invention, the arbitrary-rank values are varied in a round-robin fashion to minimize the bias inherent in conventional schemes using a priority encoder. This makes the device selection process conform better to the device-specific values, which are presumably selected to optimize system performance.
Type: Grant
Filed: January 22, 1999
Date of Patent: June 25, 2002
Assignee: Koninklijke Philips Electronics N.V.
Inventor: Mark W. Johnson
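A sketch of the combined-priority selection, assuming the device-specific value occupies the high bits and a rotated arbitrary rank occupies the low bits; the simple rotation step below stands in for whatever round-robin variation the patent actually uses.

```python
def combined_priority(device_value, rank, rank_bits=3):
    # device-specific value in the high bits, arbitrary rank in the low bits
    return (device_value << rank_bits) | rank

def select_master(device_values, rotation):
    """Pick the device with the highest combined priority. The arbitrary ranks
    are a rotated permutation of 0..n-1, so ties on the device-specific value
    are always broken, and the tie-break bias rotates round-robin over time."""
    n = len(device_values)
    ranks = [(i + rotation) % n for i in range(n)]
    scores = [combined_priority(v, r) for v, r in zip(device_values, ranks)]
    return max(range(n), key=lambda i: scores[i])

device_values = [5, 7, 7, 2]             # devices 1 and 2 tie on device-specific priority
print(select_master(device_values, rotation=0))  # device 2 wins the tie-break
print(select_master(device_values, rotation=2))  # after rotation, device 1 wins instead
```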
-
Patent number: 6253260
Abstract: Disclosed is a system and method for processing a data access request (DAR). A processing unit, such as a storage controller, receives a DAR, indicating data to return on a channel, such as a channel connecting to a host system, and priority information for the received DAR. The processing unit retrieves the requested data for the received DAR from a memory area, such as a cache or direct access storage device (DASD), and determines whether there is a queue of data entries indicating retrieved data for DARs to transfer on the channel. The queued DARs include priority information. The processing unit processes at least one data entry in the queue, the priority information for the data entry, and the priority information for the received DAR to determine a position in the queue for the received DAR. The processing unit then indicates that the received DAR is at the determined position in the queue and processes the queue to select retrieved data to transfer on the channel to the host system.
Type: Grant
Filed: October 22, 1998
Date of Patent: June 26, 2001
Assignee: International Business Machines Corporation
Inventors: Brent Cameron Beardsley, James Lincoln Iskiyan, Harry Morris Yudenfriend
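Priority-ordered insertion of data access requests can be illustrated with a sorted queue; the lower-number-is-more-urgent convention and the DARQueue interface are assumptions for the sketch.

```python
import bisect

class DARQueue:
    """Keep queued data entries ordered by priority (lower number = more
    urgent here, an assumed convention); a new entry is placed behind any
    equal-priority entries already queued."""
    def __init__(self):
        self._entries = []          # sorted list of (priority, data)

    def insert(self, priority, data):
        keys = [p for p, _ in self._entries]
        pos = bisect.bisect_right(keys, priority)   # compare against queued priorities
        self._entries.insert(pos, (priority, data))
        return pos                                   # position chosen for the received DAR

    def next_transfer(self):
        return self._entries.pop(0)[1] if self._entries else None

q = DARQueue()
q.insert(5, "bulk copy")
q.insert(1, "interactive read")      # jumps ahead of the bulk copy
print(q.next_transfer())             # 'interactive read'
```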
-
Patent number: 6115758
Abstract: The present invention relates to a slot control method of a multi-port network switch and a switch structure therefor. More particularly, the present invention relates to a slot control method of a shared memory structure with a fixed sequence and a dynamic slot effect. According to the present invention, a slot processor is provided in a slot controller of a network switch for controlling and sequentially allowing a plurality of transportation ports connected to the slot controller to perform data transmission in a fixed round-robin manner while a maximum allowable slot time is set. The slot controller continuously detects whether the active transportation port sends an operation request signal or whether the maximum allowable slot time is exceeded. If there is no operation request signal or the allowable slot time is exceeded, data transmission of the next transportation port is allowed and performed immediately, thereby reducing packet latency.
Type: Grant
Filed: October 19, 1998
Date of Patent: September 5, 2000
Assignee: Accton Technology Corporation
Inventor: Aphrodite Chen
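A simplified software rendering of the slot control loop: each port keeps its slot only while it has pending requests and its maximum allowable slot time has not elapsed. The Port interface and timing values are invented for illustration.

```python
import time

def transmit(frame):
    print("forwarded", frame)

def serve_ports(ports, max_slot_time, rounds=1):
    """Fixed round-robin over the ports; a port holds its slot only while it
    has a pending request and the maximum allowable slot time is not exceeded.
    Ports expose pop_request() -> frame or None (an assumed interface)."""
    for _ in range(rounds):
        for port in ports:
            deadline = time.monotonic() + max_slot_time
            while time.monotonic() < deadline:
                frame = port.pop_request()
                if frame is None:            # no request: hand the slot on immediately
                    break
                transmit(frame)

class Port:
    def __init__(self, frames):
        self.frames = list(frames)
    def pop_request(self):
        return self.frames.pop(0) if self.frames else None

serve_ports([Port(["pkt-A", "pkt-B"]), Port([])], max_slot_time=0.001)
```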
-
Patent number: 6023720
Abstract: The disk scheduling system supports the processing of simultaneous storage device read and write requests in a video server environment, thereby supporting both video-on-demand and non-linear editing applications. Read requests are the result of movie viewing, while write requests are the result of video clip editing or movie authoring procedures. Due to the real-time demands of movie viewing, read requests have to be fulfilled within certain deadlines, otherwise they are considered lost. Since the data to be written into the storage device is stored in main memory buffers (or write buffers), write requests can be postponed until critical read requests are processed. However, write requests still have to be processed within reasonable delays and without the possibility of indefinite postponement. This is due to the physical constraint of the limited size of the main memory buffers.
Type: Grant
Filed: February 9, 1998
Date of Patent: February 8, 2000
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Walid G. Aref, Ibrahim Kamel, Thirumale N. Niranjan, Shahram Ghandeharizadeh
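The read/write trade-off can be sketched as an earliest-deadline read queue plus a bounded write buffer that is drained when no read is imminent or the buffer is nearly full; the thresholds and class names below are illustrative, not the patent's algorithm.

```python
import heapq

class VideoDiskScheduler:
    """Serve reads by earliest deadline; service a buffered write only when no
    read is due soon or the write buffers are nearly full (illustrative policy)."""
    def __init__(self, write_buffer_slots):
        self.reads = []                   # heap of (deadline, request)
        self.writes = []                  # FIFO of buffered writes
        self.slots = write_buffer_slots

    def add_read(self, deadline, req):
        heapq.heappush(self.reads, (deadline, req))

    def add_write(self, req):
        self.writes.append(req)           # data sits in a main-memory write buffer

    def next_io(self, now):
        read_due_soon = bool(self.reads) and self.reads[0][0] <= now + 1.0
        buffers_tight = len(self.writes) >= self.slots
        if read_due_soon or (self.reads and not buffers_tight):
            return heapq.heappop(self.reads)[1]   # real-time reads come first
        if self.writes:
            return self.writes.pop(0)             # drain writes before buffers overflow
        return None

s = VideoDiskScheduler(write_buffer_slots=2)
s.add_read(deadline=10.0, req="frame-42")
s.add_write("edited-clip-block")
print(s.next_io(now=9.5))   # the read's deadline is imminent, so it goes first
```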
-
Patent number: 5978866
Abstract: Higher speed data transactions between a host computer's system memory and a plurality of slow peripheral devices are accomplished by providing distributed DMA functions along with distributed pre-fetch buffers. The first I/O device accesses the host bus via a first DMA channel and a first pre-fetch buffer, the second I/O device accesses the host bus via a second DMA channel and a second pre-fetch buffer, and the third I/O device accesses the host bus via a third DMA channel and a third pre-fetch buffer. In a first DMA transaction, the first pre-fetch buffer is filled with data being transferred between the first I/O device and the host system memory. While the data are transferred between the pre-fetch buffer and either the first I/O device or the system memory, the second pre-fetch buffer is being filled pursuant to a second DMA transaction between the second I/O device and the system memory.
Type: Grant
Filed: May 16, 1997
Date of Patent: November 2, 1999
Assignee: Integrated Technology Express, Inc.
Inventor: Yueh-Yao Nain