Buffer Space Allocation Or Deallocation Patents (Class 710/56)
-
Patent number: 7840768
Abstract: System-directed checkpointing is enabled in otherwise standard computers through relatively straightforward augmentations to the computer's memory controller hub.
Type: Grant
Filed: October 16, 2009
Date of Patent: November 23, 2010
Assignee: Reliable Technologies, Inc.
Inventors: Jack Justin Stiffler, Donald D. Burn
-
Patent number: 7835021
Abstract: Systems, methods, and media for managing the print speed of a variable speed printer are disclosed. Embodiments include a print controller system having a raster image processor for rasterizing a print job to create a plurality of rasterized pages and a printer controller buffer for storing one or more of the rasterized pages. The printer controller buffer may also transmit at a print engine feed rate the one or more rasterized pages to a print engine. Embodiments may also include a speed control module in communication with the printer controller buffer for determining the print engine feed rate. Further embodiments may include the speed control module determining the print engine feed rate based on one or more of page processing times, page arrival rates, estimated print completion rates, and the number of pages in a print engine buffer.
Type: Grant
Filed: May 23, 2005
Date of Patent: November 16, 2010
Assignee: Infoprint Solutions Company, LLC
Inventor: John Thomas Varga
-
Patent number: 7836231
Abstract: A buffer control method for controlling packets to be stored in a buffer having a data region and a command queue region. First, the number of packets that can be stored in the data buffer is determined. Then, a count value representing the remaining capacity of the data region is updated. Finally, the count value and a value of maximum data length are compared to determine whether to increase the number of packets that can be stored in the buffer.
Type: Grant
Filed: May 11, 2007
Date of Patent: November 16, 2010
Assignee: Via Technologies, Inc.
Inventors: I-Lin Hsieh, Chun-Yuan Su
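The counting scheme this abstract describes can be sketched briefly. The following Python is an illustration only; the class and field names (`PacketBufferCredit`, `packet_slots`) are assumptions, not from the patent. The idea: a buffer advertises capacity for another packet only while its remaining data-region space can hold a maximum-length packet.

```python
# Hypothetical sketch of the credit check: compare the remaining capacity
# of the data region against the maximum data length to decide how many
# more packets the buffer can accept.
class PacketBufferCredit:
    def __init__(self, data_region_size, max_packet_len):
        self.remaining = data_region_size  # free bytes left in the data region
        self.max_packet_len = max_packet_len
        self.packet_slots = 0              # packets the buffer will currently accept

    def update(self):
        # Grant one packet slot for every maximum-length packet that still fits.
        self.packet_slots = self.remaining // self.max_packet_len
        return self.packet_slots

    def store(self, packet_len):
        if packet_len > self.remaining:
            raise BufferError("data region full")
        self.remaining -= packet_len
        self.update()
```

Tracking the count this way avoids scanning the buffer on every arrival: only the subtraction and one division run per stored packet.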
-
Patent number: 7822891
Abstract: A system and method for storing a multidimensional array of data, such as a two dimensional (2-D) array of video data, in a non-contiguous memory space. The system and method maps individually indexed elements of a multidimensional array of data from a source device into blocks of non-contiguous memory available in a destination memory system, even when the destination blocks are small and/or their size does not correlate in any way to the dimensions of a source buffer. In particular, the blocks of non-contiguous memory may be as small as a single element of the data indexed in the 2-D array.
Type: Grant
Filed: June 13, 2006
Date of Patent: October 26, 2010
Assignee: Broadcom Corporation
Inventors: Glen T. McDonnell, Martin E. Perrigo
-
Patent number: 7822814
Abstract: Method, apparatus and article of manufacture for acquiring a buffer after data from a remote sender (e.g., client) has been received by a local machine (e.g., server). Because the client data has already been received when the buffer is acquired, the buffer may be sized exactly to the size of the client data. In general, the buffer may be caller supplied or system supplied.
Type: Grant
Filed: March 27, 2008
Date of Patent: October 26, 2010
Assignee: International Business Machines Corporation
Inventors: Mark Linus Bauman, Bob Richard Cernohous, Kent L. Hofer, John Charles Kasperski, Steven John Simonson, Jay Robert Weeks
-
Patent number: 7818478
Abstract: A mechanism is disclosed for performing I/O operations using queue banks within a data processing system that supports multiple processing partitions. A queue bank is a re-useable area of memory allocated for performing I/O operations. All memory locking and address-translation functions are generally performed only once for a queue bank to reduce system overhead. After a queue bank has been used to perform an I/O operation, some processing is performed to make it available for re-use. This processing determines whether the queue bank contains memory that is being removed from a current processing partition. If so, a delay is imposed so that the queue bank is not made available for immediate re-use. This creates a window of time wherein all queue banks that contain the affected memory are inactive, thereby allowing the affected memory to be removed from the partition without halting on-going I/O activity.
Type: Grant
Filed: November 23, 2009
Date of Patent: October 19, 2010
Assignee: Unisys Corporation
Inventor: David W. Schroth
-
Patent number: 7818744
Abstract: An apparatus and method for redundant transient fault detection. In one embodiment, the method includes the replication of an application into two communicating threads, a leading thread and a trailing thread. The trailing thread may repeat computations performed by the leading thread to detect transient faults, referred to herein as “soft errors.” A first in, first out (FIFO) buffer of shared memory is reserved for passing data between the leading thread and the trailing thread. The FIFO buffer may include a buffer head variable to write data to the FIFO buffer and a buffer tail variable to read data from the FIFO buffer. In one embodiment, data passing between the leading thread and the trailing thread is restricted according to a data unit size, and thread synchronization between the two threads is limited to buffer overflow/underflow detection. Other embodiments are described and claimed.
Type: Grant
Filed: December 30, 2005
Date of Patent: October 19, 2010
Assignee: Intel Corporation
Inventors: Cheng C. Wang, Youfeng Wu
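The head/tail FIFO the abstract outlines can be sketched in a few lines. This is a simplified single-threaded illustration under assumed names (`ReplicaFifo` and its methods are not from the patent): one index advanced only by the leading thread's writes, one only by the trailing thread's reads, with synchronization reduced to overflow/underflow checks.

```python
# Minimal sketch of a shared FIFO between a leading and a trailing thread:
# `head` is written only by the producer, `tail` only by the consumer, and
# the only coordination is detecting a full (overflow) or empty (underflow)
# buffer.
class ReplicaFifo:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # next slot the leading thread writes
        self.tail = 0   # next slot the trailing thread reads

    def push(self, value):
        if self.head - self.tail == self.capacity:
            return False                       # overflow: leading thread must wait
        self.buf[self.head % self.capacity] = value
        self.head += 1
        return True

    def pop(self):
        if self.head == self.tail:
            return None                        # underflow: trailing thread must wait
        value = self.buf[self.tail % self.capacity]
        self.tail += 1
        return value
```

Because each index has a single writer, a real implementation only needs memory-ordering guarantees, not locks, on the fast path.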
-
Patent number: 7813278
Abstract: A system provides congestion control and includes multiple queues that temporarily store data and a drop engine. The system associates a value with each of the queues, where each of the values relates to an amount of memory associated with the queue. The drop engine compares the value associated with a particular one of the queues to one or more programmable thresholds and selectively performs explicit congestion notification or packet dropping on data in the particular queue based on a result of the comparison.
Type: Grant
Filed: February 27, 2008
Date of Patent: October 12, 2010
Assignee: Juniper Networks, Inc.
Inventors: Pradeep Sindhu, Debashis Basu, Jayabharat Boddu, Avanindra Godbole
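The threshold comparison at the heart of this abstract is simple enough to sketch. The function name and the two-threshold policy below are illustrative assumptions; the patent only says the drop engine compares a per-queue memory value against programmable thresholds and then marks or drops.

```python
# Illustrative drop-engine decision: compare a queue's memory usage against
# programmable thresholds, then accept, ECN-mark, or drop.
def drop_engine_decision(queue_bytes, ecn_threshold, drop_threshold):
    if queue_bytes >= drop_threshold:
        return "drop"       # queue far over budget: drop the packet
    if queue_bytes >= ecn_threshold:
        return "mark-ecn"   # congested but tolerable: signal ECN instead
    return "accept"
```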
-
Patent number: 7809739
Abstract: A method and system for enabling dynamic matching of storage utilization characteristics of a host system application with the characteristics of the available storage pools of an attached distributed storage system, in order to provide an optimal match between the application and selected storage pool. An abstraction manager is provided, enhanced with a storage device configuration utility/module, which performs a series of tasks to (1) obtain/collect the correct configuration information from each connected storage device or storage pools and/or (2) calculate the configuration information when the information is not readily available. The storage device configuration module then normalizes, collates and matches the configuration information to the various applications running on the host system and/or outputs the information to a user/administrator of the host system via a software interface.
Type: Grant
Filed: August 5, 2005
Date of Patent: October 5, 2010
Assignee: International Business Machines Corporation
Inventors: James Patrick Allen, Matthew Albert Huras, Thomas Stanley Mathews, Lance Warren Russell
-
Publication number: 20100251245
Abstract: Task and data management systems, methods, and apparatus are disclosed. A processor event that requires more memory space than is available in a local storage of a co-processor is divided into two or more segments. Each segment has a segment size that is less than or the same as an amount of memory space available in the local storage. The segments are processed with one or more co-processors to produce two or more corresponding outputs. The two or more outputs are associated into one or more groups. Each group is less than or equal to a target data size associated with a subsequent process.
Type: Application
Filed: June 8, 2010
Publication date: September 30, 2010
Applicant: Sony Computer Entertainment Inc.
Inventors: Richard B. Stenson, John P. Bates
-
Patent number: 7802032
Abstract: A dummy node is enqueued to a concurrent, non-blocking, lock-free FIFO queue only when necessary to prevent the queue from becoming empty. The dummy node is only enqueued during a dequeue operation and only when the queue contains a single user node during the dequeue operation. This reduces overhead relative to conventional mechanisms that always keep a dummy node in the queue. User nodes are enqueued directly to the queue and can be immediately dequeued on-demand by any thread. Preferably, the enqueueing and dequeueing operations include the use of load-linked/store conditional (LL/SC) synchronization primitives. This solves the ABA problem without requiring the use of a unique number, such as a queue-specific number, and contrasts with conventional mechanisms that include the use of compare-and-swap (CAS) synchronization primitives and address the ABA problem through the use of a unique number.
Type: Grant
Filed: November 13, 2006
Date of Patent: September 21, 2010
Assignee: International Business Machines Corporation
Inventor: David Alan Christenson
-
Patent number: 7802062
Abstract: A buffer management system is disclosed. A ring buffer may be implemented. The ring buffer includes a number of zones. Each of the zones includes state fields. The state fields include a filled indicator indicating whether the zone is full. The state fields for the zone further include a committed indicator indicating whether data in the zone is readable. The state fields for the zone also include a recycling indicator indicating whether the zone can be recycled. The ring buffer includes entries in the zones. Each of the entries includes state information. The entry state information includes a zone offset indication indicating a memory offset into the zone. The entry state information further includes a size indicating the size of the entry. The entry state information also includes a committed indicator indicating that the entry is readable.
Type: Grant
Filed: September 28, 2007
Date of Patent: September 21, 2010
Assignee: Microsoft Corporation
Inventor: Adrian Marinescu
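The zone and entry state fields the abstract enumerates map naturally onto a pair of records. The sketch below models only the metadata layout it lists (filled/committed/recycling per zone; offset/size/committed per entry); the `append` logic and all names are illustrative assumptions, not the patented mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    zone_offset: int         # memory offset of this entry within its zone
    size: int                # entry size in bytes
    committed: bool = False  # entry becomes readable once committed

@dataclass
class Zone:
    capacity: int
    filled: bool = False     # no space left for new entries
    committed: bool = False  # all data in the zone is readable
    recycling: bool = False  # zone may be reclaimed for reuse
    entries: list = field(default_factory=list)

    def append(self, size):
        # Place the new entry right after the existing ones; mark the zone
        # filled when it cannot hold the requested size.
        offset = sum(e.size for e in self.entries)
        if offset + size > self.capacity:
            self.filled = True
            return None
        entry = Entry(zone_offset=offset, size=size)
        self.entries.append(entry)
        return entry
```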
-
Patent number: 7802066
Abstract: An efficient memory management method for handling large data volumes is disclosed, comprising a memory management interface between a plurality of applications and a physical memory; determining a priority list of buffers accessed by the plurality of applications; providing efficient disk paging based on the priority list; ensuring sufficient physical memory is available; sharing managed data buffers among a plurality of applications; and mapping and unmapping data buffers in virtual memory efficiently to overcome the limits of virtual address space.
Type: Grant
Filed: February 8, 2006
Date of Patent: September 21, 2010
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: Gianluca Paladini, Thomas Moeller
-
Publication number: 20100228898
Abstract: An apparatus, system, and method are disclosed for dynamically allocating buffers during the execution of a job. A plan module sets a buffer allocation plan for the job using data access history that contains information about the number and nature of data access events in past executions of the same job. A buffer module allocates buffers during the execution of the job, and alters the buffer allocation to improve performance for direct access events for those portions of the job that the buffer allocation plan indicates have historically included predominantly direct access events. The buffer module alters the buffer allocation to improve performance for sequential access events for those portions of the job that the buffer allocation plan indicates have historically included predominantly sequential access events. A history module then collects data access information about the current execution and adds that information to the data access history.
Type: Application
Filed: March 5, 2009
Publication date: September 9, 2010
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Herman Aranguren, David Bruce LeGendre, David Charles Reed, Max Douglas Smith
-
Patent number: 7788426
Abstract: An apparatus and method for initializing an elastic buffer are provided. The elastic buffer, a FIFO buffer, outputs and writes data according to a reading index and a writing index, respectively. First, a random number is generated. Then, the writing index is determined according to the random number and the reading index. Finally, the elastic buffer is initialized according to the writing index and the reading index.
Type: Grant
Filed: May 28, 2009
Date of Patent: August 31, 2010
Assignee: Via Technologies, Inc.
Inventor: Chi-Wei Shih
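The three initialization steps above can be sketched directly. The exact formula relating the random number, reading index, and writing index is not given in the abstract, so the one below is an assumption chosen only to keep the writer a random nonzero distance ahead of the reader.

```python
import random

# Hedged sketch of elastic-buffer initialization: generate a random number,
# derive the writing index from it and the reading index, then initialize
# the FIFO. The modular-offset formula is illustrative, not the patented one.
def init_elastic_buffer(depth, read_index=0):
    rnd = random.getrandbits(8)
    # Writer starts a random distance (1 .. depth-1) ahead of the reader.
    write_index = (read_index + 1 + rnd % (depth - 1)) % depth
    buffer = [0] * depth
    return buffer, read_index, write_index
```

Randomizing the initial separation is one way to spread out the point at which the elastic buffer must insert or drop fill words when the two clock domains drift.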
-
Patent number: 7788463
Abstract: Systems and methods for cyclic buffer management are described. In one aspect, the systems and methods enable cyclic buffer wrap-around write operations independent of temporary buffer allocations and corresponding multiple data copies. To this end, the systems and methods map the cyclic buffer's physical addresses twice (“doubly map”) to a consecutive block of virtual memory. When an application desires to write beyond the end of what represents the cyclic buffer, the systems and methods provide a logical pointer to the doubly mapped virtual memory, enabling the application to directly write into the virtual memory from the end of the first cyclic buffer mapping to the beginning of the second cyclic buffer mapping. The memory management operations automatically transfer the data out of this doubly mapped virtual memory into the cyclic buffer for subsequent extraction and presentation to the user.
Type: Grant
Filed: February 13, 2007
Date of Patent: August 31, 2010
Assignee: Microsoft Corporation
Inventor: Kenneth H. Cooper
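Python cannot doubly map physical pages the way the abstract describes, so the sketch below only emulates the idea: a virtual region twice the cyclic buffer's size lets a writer run past the end without splitting the copy, and an explicit fold step plays the role that page aliasing performs for free in the real scheme. All names are illustrative.

```python
# Emulation of a doubly mapped cyclic buffer: writes may cross the wrap
# boundary as one linear copy; _fold() mirrors the "second mapping" back
# onto the first, which real double mapping makes unnecessary.
class DoublyMappedRing:
    def __init__(self, size):
        self.size = size
        self.virtual = bytearray(2 * size)  # stands in for the double mapping

    def write_at(self, pos, data):
        # One contiguous copy, even when it runs past the buffer's end.
        start = pos % self.size
        self.virtual[start:start + len(data)] = data
        self._fold()

    def _fold(self):
        # Transfer any bytes written into the second mapping back to the
        # start of the first (i.e., the wrap-around portion).
        for i in range(self.size):
            if self.virtual[self.size + i]:
                self.virtual[i] = self.virtual[self.size + i]
                self.virtual[self.size + i] = 0

    def ring(self):
        return bytes(self.virtual[:self.size])
```

The payoff of the real double mapping is that wrap-around writes need neither a temporary buffer nor two `memcpy` calls.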
-
Patent number: 7783039
Abstract: In a digital recording apparatus including a data control circuit 2a, a memory 4, an encryption circuit 5, an interface 6, a DVD drive 8, and a CPU 3, when encryption is required during recording, data is temporarily stored in the memory 4. After the encryption circuit 5 is enabled, the data is encrypted and recording by the DVD drive 8 on a recording medium is resumed. Thus, it is possible to make the encryption circuit operate only when recording a program requiring content protection and to perform recording or reproducing from the required timing without interrupting the recording or reproducing even during start-up of the encryption circuit.
Type: Grant
Filed: July 16, 2004
Date of Patent: August 24, 2010
Assignee: Mitsubishi Denki Kabushiki Kaisha
Inventor: Tomoaki Ryu
-
Patent number: 7783802
Abstract: An embodiment of the present invention includes a switch employed in a system having two hosts and a device and for coupling two or more host ports to a device. The switch includes a power signal control circuit generating a power signal for use by the device in receiving power for operability thereto, the power signal control circuit responsive to detection of inoperability of the device and in response thereto, toggling the power signal to the device while avoiding interruption to the system.
Type: Grant
Filed: July 21, 2005
Date of Patent: August 24, 2010
Assignee: LSI Corporation
Inventors: Siamack Nemazie, Shiang-Jyh Chang, Young-Ta Wu
-
Patent number: 7779181
Abstract: A file allocation system for a hard disk drive includes a memory with driver logic and a processor configured with the driver logic to receive a request to allocate hard disk space of a defined size for a buffer file. In some embodiments, the processor is configured with the driver logic to allocate clusters for the buffer file from a plurality of clusters on the hard disk, wherein the clusters for the buffer file store media content instances. In some embodiments, the processor is configured with the driver logic to designate a portion of the clusters of the buffer file for at least one non-buffer file such that the non-buffer file is permitted to share the portion of the clusters of the buffer file with the buffer file.
Type: Grant
Filed: February 27, 2007
Date of Patent: August 17, 2010
Assignee: Scientific-Atlanta, LLC
Inventor: Harold J. Plourde, Jr.
-
Patent number: 7774571
Abstract: Provided is a system, deployment and program for resource allocation unit queuing in which an allocation unit associated with a task is classified. An allocation unit freed as the task ends is queued for use by another task in a queue at a selected location within the queue in accordance with the classification of said allocation unit. In one embodiment, an allocation unit is queued at a first end of the queue if classified in a first class and is queued at a second end of the queue if classified in a second class. Other embodiments are described and claimed.
Type: Grant
Filed: December 10, 2008
Date of Patent: August 10, 2010
Assignee: International Business Machines Corporation
Inventors: Michael Thomas Benhase, Lawrence Carter Blount, James Chien-Chiung Chen, Juan Alonso Coronado, Roger Gregory Hathorn
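The two-ended queuing rule in the embodiment above maps directly onto a double-ended queue. The sketch below is illustrative (class labels and the reuse rationale in the comments are assumptions): freed units of one class go to the front of the free queue and units of the other class to the back, so the allocator's pops from the front preferentially reuse the first class.

```python
from collections import deque

free_queue = deque()

def free_allocation_unit(unit, unit_class):
    # Queue the freed unit at one end or the other per its classification.
    if unit_class == "first":
        free_queue.appendleft(unit)   # first class: first end, reused soonest
    else:
        free_queue.append(unit)       # second class: second end, reused last

def allocate():
    # Allocation always takes from the first end of the queue.
    return free_queue.popleft() if free_queue else None
```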
-
Patent number: 7774077
Abstract: An audio context object gathers multiple channels of audio data from an audio device and stores each channel of data separately in a ring buffer. Clients of the audio context can request any number of channels of data at any interval from the audio context. Multiple clients can share the same audio device. The ring buffer used by the audio context object stores the channels of audio data in a two-dimensional array such that each channel of audio data is stored in contiguous memory.
Type: Grant
Filed: June 21, 2005
Date of Patent: August 10, 2010
Assignee: Apple Inc.
Inventor: Bradley D. Ford
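The two-dimensional layout described above can be sketched as one ring row per channel. The class below is an assumption-laden illustration (names and API are not from the patent); the point it shows is that keeping each channel's samples contiguous lets a client read any subset of channels without de-interleaving.

```python
# Sketch of a per-channel ring buffer: rows are channels, columns are a ring
# of sample frames, so each channel's data is contiguous in memory.
class ChannelRing:
    def __init__(self, channels, frames):
        self.frames = frames
        self.data = [[0.0] * frames for _ in range(channels)]
        self.write_pos = 0

    def push_frame(self, samples):
        # One sample per channel, written into every channel's ring row.
        for ch, s in enumerate(samples):
            self.data[ch][self.write_pos] = s
        self.write_pos = (self.write_pos + 1) % self.frames

    def read(self, channel, start, count):
        # A client asks for one channel over any interval; no de-interleave
        # pass is needed because the row is already contiguous.
        row = self.data[channel]
        return [row[(start + i) % self.frames] for i in range(count)]
```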
-
Patent number: 7769925
Abstract: A file allocation system for a hard disk drive includes a memory with driver logic and a processor configured with the driver logic to receive a request to allocate hard disk space of a defined size for a buffer file. In some embodiments, the processor is configured with the driver logic to allocate clusters for the buffer file from a plurality of clusters on the hard disk, wherein the clusters for the buffer file store media content instances. In some embodiments, the processor is configured with the driver logic to designate a portion of the clusters of the buffer file for at least one non-buffer file such that the non-buffer file is permitted to share the portion of the clusters of the buffer file with the buffer file.
Type: Grant
Filed: May 5, 2006
Date of Patent: August 3, 2010
Assignee: Scientific-Atlanta LLC
Inventor: Harold J. Plourde, Jr.
-
Patent number: 7769926
Abstract: A method for providing a buffer status report in a mobile communication network is implemented between a base station and a user equipment. When data arrives at buffers of the user equipment and the priority of a logical channel for the data is higher than those of other logical channels for existing data in the buffers, a short buffer status report associated with the buffer of a logical channel group corresponding to the arriving data is triggered. The user equipment uses resources allocated by the base station to fill all data of the buffer of the logical channel group into a Protocol Data Unit. If all data of the buffer of the logical channel group corresponding to the arriving data can be completely filled in the Protocol Data Unit, the short buffer status report is canceled. Otherwise, the user equipment transmits the short buffer status report.
Type: Grant
Filed: October 28, 2008
Date of Patent: August 3, 2010
Assignee: Sunplus mMobile Inc.
Inventors: Chunli Wu, Tsung-Liang Lu, Chung-Shan Wang, Yen-Chen Chen, Li-Cheng Lin
-
Patent number: 7769922
Abstract: A processing system for accessing first and second data types. The first data type is data supplied from a peripheral and the second data type is randomly accessible data held in a data memory. The processing system includes: a processor for executing instructions; a stream register unit connected to supply data from the peripheral to the processor; and a FIFO. The FIFO is connected to receive data from the peripheral and connected to the stream register unit by a communication path, along which the received data can be supplied from the FIFO to the stream register unit. The processing system also includes a memory bus connected between the data memory and the processor, across which the processor can access the randomly accessible data.
Type: Grant
Filed: October 30, 2003
Date of Patent: August 3, 2010
Assignee: STMicroelectronics (R&D) Ltd.
Inventors: Mark Owen Homewood, Antonio Maria Borneo
-
Publication number: 20100191878
Abstract: Methods and apparatuses for preventing overflow at a receiver buffer are provided. Data packets of varying size are received into a receiver buffer and quantified by a byte counter to determine an amount of data in the receiver buffer at a given time. A data capacity status for the receiver buffer is then generated as a function of the amount of data in the receiver buffer.
Type: Application
Filed: March 12, 2009
Publication date: July 29, 2010
Applicant: QUALCOMM Incorporated
Inventors: Saishankar Nandagopalan, Etienne F. Chaponniere
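The byte-counter idea above fits in a few lines. The sketch below is illustrative; the status labels and the 50%/90% cutoffs are assumptions, since the publication only says the capacity status is a function of the counted bytes.

```python
# Byte-counter sketch: track queued bytes as variable-size packets arrive
# and are consumed, and derive a capacity status a sender can react to
# before the buffer overflows.
class ReceiverBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.byte_count = 0   # amount of data currently in the buffer

    def receive(self, packet_len):
        if self.byte_count + packet_len > self.capacity:
            raise OverflowError("receiver buffer overflow")
        self.byte_count += packet_len

    def consume(self, n):
        self.byte_count -= min(n, self.byte_count)

    def status(self):
        fill = self.byte_count / self.capacity
        if fill >= 0.9:
            return "almost-full"   # sender should back off
        if fill >= 0.5:
            return "filling"
        return "ok"
```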
-
Patent number: 7765343
Abstract: Certain embodiments of the invention may be found in a method and system for handling data in port bypass controllers for storage systems and may comprise receiving a data stream from a receive port bypass controller's port and buffering at least a portion of the received data stream in at least one EFIFO buffer integrated within the port bypass controller. A data rate or frequency of the received data stream may be changed by inserting at least one extended fill word in the buffered portion of the received data stream or by deleting at least one fill word from the received data stream buffered in the EFIFO buffer. The extended fill word may comprise a loop initialization primitive (LIP), a loop port bypass (LPB), a loop port enable (LPE), a not operation state (NOS), an offline state (OLS), a link reset response (LRR) and/or a link reset (LR).
Type: Grant
Filed: February 13, 2004
Date of Patent: July 27, 2010
Assignee: Broadcom Corporation
Inventors: Chung-Jue Chen, Ali Ghiasi, Jay Proano, Rajesh Satapathy, Steve Thomas
-
Patent number: 7765335
Abstract: A communication system complying with the SPI-4 Phase 2 standard includes a local device, an opposing device, a first data channel to transfer payload data from the local to the opposing device, a second data channel opposed to the first data channel, and a first status channel able to transfer data from the local to the opposing device. The local device periodically outputs buffer status information of a data buffer for storing payload data received over the second data channel to the first status channel. Further, the local device inserts the buffer status information between the payload data according to a priority of the buffer status information in order to output the buffer status information to the first data channel. The opposing device controls the output of payload data to the second data channel according to the buffer status information received over the first status channel and the first data channel.
Type: Grant
Filed: January 23, 2008
Date of Patent: July 27, 2010
Assignee: NEC Electronics Corporation
Inventor: Tomofumi Iima
-
Patent number: 7761620
Abstract: A communications buffer and a control unit that configure a USB connection endpoint are provided in a communications device connected by a USB bus to a host device. The control unit changes the receive buffer size of a receive buffer where the communications buffer stores receive data, based on an instruction sent from the host device side through USB virtual serial communication, to enable the reception of real-time execution commands by the communications device. This enables the reception of real-time execution commands when the receive buffer on the communications device side is in a buffer-full state in data communications between a host device and a communications device.
Type: Grant
Filed: July 3, 2007
Date of Patent: July 20, 2010
Assignee: Citizen Holdings Co., Ltd.
Inventor: Masaji Iwata
-
Publication number: 20100169519
Abstract: In some embodiments a reconfigurable buffer manager manages an on-chip memory, and dynamically allocates and/or de-allocates portions of the on-chip memory to and/or from a plurality of functional on-chip blocks. Other embodiments are described and claimed.
Type: Application
Filed: December 30, 2008
Publication date: July 1, 2010
Inventors: Yong Zhang, Michael J. Espig
-
Patent number: 7747823
Abstract: Cache management strategies are described for retrieving information from a storage medium, such as an optical disc, using a cache memory including multiple cache segments. A first group of cache segments can be devoted to handling the streaming transfer of a first type of information, and a second group of cache segments can be devoted to handling the bulk transfer of a second type of information. A host system can provide hinting information that identifies which group of cache segments a particular read request targets. A circular wrap-around fill strategy can be used to iteratively supply new information to the cache segments upon cache hits by performing pre-fetching. Various eviction algorithms can be used to select a cache segment for flushing and refilling upon a cache miss, such as a least recently used (LRU) algorithm or a least frequently used (LFU) algorithm.
Type: Grant
Filed: February 11, 2008
Date of Patent: June 29, 2010
Assignee: Microsoft Corporation
Inventors: Brian L. Schmidt, Jonathan E. Lange, Timothy R. Osborne
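Of the eviction algorithms the abstract names, LRU is easy to sketch. The class below illustrates only that one step, on a miss, flush and refill the least recently used segment; the names and the `fetch` callback are assumptions for the example.

```python
from collections import OrderedDict

# LRU eviction over a fixed set of cache segments: a hit refreshes the
# segment's recency; a miss evicts the least recently used segment and
# refills it with new information.
class SegmentCache:
    def __init__(self, num_segments):
        self.num_segments = num_segments
        self.segments = OrderedDict()   # key -> segment data, oldest first

    def lookup(self, key, fetch):
        if key in self.segments:
            self.segments.move_to_end(key)      # cache hit: mark most recent
            return self.segments[key]
        if len(self.segments) >= self.num_segments:
            self.segments.popitem(last=False)   # cache miss: evict LRU segment
        self.segments[key] = fetch(key)         # refill the freed segment
        return self.segments[key]
```

Swapping in LFU would mean keeping a hit counter per segment and evicting the minimum instead of the oldest.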
-
Publication number: 20100161855
Abstract: A method and system for offloading I/O processing from a first computer to a second computer, using RDMA-capable network interconnects, are disclosed. The method and system include a client on the first computer communicating over an RDMA connection to a server on the second computer by way of a lightweight input/output (LWIO) protocol. The protocol generally comprises a network discovery phase followed by an I/O processing phase. During the discovery phase, the client and server determine a minimal list of shared RDMA-capable providers. During the I/O processing phase, the client posts I/O requests for offloading to the second machine over a mutually-authenticated RDMA channel. The I/O model is asymmetric, with read operations being implemented using RDMA and write operations being implemented using normal sends. Read and write requests may be completed in polling mode and in interrupt mode. Buffers are managed by way of a credit mechanism.
Type: Application
Filed: March 4, 2010
Publication date: June 24, 2010
Applicant: Microsoft Corporation
Inventors: Ahmed H. Mohamed, Anthony F. Voellm
-
Patent number: 7743185
Abstract: A method, system, and computer program product in a data processing system are disclosed for dynamically selecting software buffers for aggregation in order to optimize system performance. Data to be transferred to a device is received. The data is stored in a chain of software buffers. Current characteristics of the system are determined. Software buffers to be combined are then dynamically selected. This selection is made according to the characteristics of the system in order to maximize performance of the system.
Type: Grant
Filed: October 17, 2008
Date of Patent: June 22, 2010
Assignee: International Business Machines Corporation
Inventors: James R. Gallagher, Ron Encarnacion Gonzalez, Binh K. Hua, Sivarama K. Kodukula
-
Method and apparatus for synchronizing a software buffer index with an unknown hardware buffer index
Patent number: 7743182
Abstract: Method and apparatus for synchronizing a software buffer index with an unknown hardware buffer index. Specifically, a method of processing data is disclosed comprising synchronizing a software buffer index to a hardware buffer index. The method sequentially searches through a plurality of buffers containing data to find a second buffer with unprocessed data. The method is implemented when the software buffer index points to a first buffer containing processed data. Thereafter, the software buffer index is reset to the next available buffer having processed data following the second buffer.
Type: Grant
Filed: February 6, 2002
Date of Patent: June 22, 2010
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Kenneth C. Duisenberg
-
Patent number: 7739417
Abstract: The present invention provides a virtual machine system and a method of accessing a graphics card. The virtual machine system includes a VMM, an SOS and at least one GOS, and further includes a resource converting module for performing IO address converting on graphics card framebuffer accessing data from GOS(s) or mapping MMIO(s) to physical MMIO(s) of a graphics card based on a resource converting table, and sending the processed data to the graphics card; and a framebuffer allocating module for dividing a framebuffer resource of the graphics card into multiple blocks and allocating them respectively to corresponding GOS(s). The resource converting table(s) records correspondences between a resource allocation for the graphics card by SOS and a resource allocation for the graphics card by GOS(s). The framebuffer MMIO resource(s) allocated to the graphics card by GOS(s) is/are the framebuffer allocated to GOS(s) by the framebuffer allocating module.
Type: Grant
Filed: February 4, 2008
Date of Patent: June 15, 2010
Assignees: Legend Holdings Ltd., Lenovo (Beijing) Limited
Inventors: Yongfeng Liu, Chunmei Liu, Jun Chen, Ke Ke
-
Patent number: 7739426
Abstract: A processing engine includes descriptor transfer logic that receives descriptors generated by a software controlled general purpose processing element. The descriptor transfer logic manages transactions that send the descriptors to resources for execution and receive responses back from the resources in response to the sent descriptors. The descriptor transfer logic can manage the allocation and operation of buffers and registers that initiate the transaction, track the status of the transaction, and receive the responses back from the resources all on behalf of the general purpose processing element.
Type: Grant
Filed: October 31, 2006
Date of Patent: June 15, 2010
Assignee: Cisco Technology, Inc.
Inventors: Donald E. Steiss, Christopher E. White, Jonathan Rosen, John A. Fingerhut, Barry S. Burns
-
Patent number: 7739427
Abstract: An apparatus and method for dynamically allocating memory between inbound and outbound paths of a networking protocol handler so as to optimize the ratio of a given amount of memory between the inbound and outbound buffers is presented. Dedicated but sharable buffer memory is provided for both the inbound and outbound processors of a computer network. Buffer memory is managed so as to dynamically alter what portion of memory is used to receive and store incoming data packets or to transmit outgoing data packets. Use of the present invention reduces throttling of data rate transmissions and other memory access bottlenecks associated with conventional fixed-memory network systems.
Type: Grant
Filed: July 31, 2008
Date of Patent: June 15, 2010
Assignee: International Business Machines Corporation
Inventors: Mark R. Bilak, Robert M. Bunce, Steven C. Parker, Brian J. Schuh
-
Patent number: 7735099
Abstract: A method and system for receiving and sending network packets over a network are provided. The system includes a host processor that executes an operating system for a host system and at least one application that runs in a context that is different from a context of the operating system; and a network adapter with a hardware device that can run a network protocol stack, wherein the application can access the network adapter directly via an application specific interface layer without using the operating system, and the application designates a named memory buffer for a network connection, and when data is received by the network adapter for the network connection, the network adapter passes the received data directly to the designated named buffer.
Type: Grant
Filed: December 23, 2005
Date of Patent: June 8, 2010
Assignee: QLOGIC, Corporation
Inventor: Charles Micalizzi, Jr.
-
Publication number: 20100138571Abstract: A system, method, and computer readable article of manufacture for sharing buffer management. The system includes: a predictor module to predict at runtime a transaction data size of a transaction according to history information of the transaction; and a resource management module to allocate sharing buffer resources for the transaction according to the predicted transaction data size in response to beginning of the transaction, to record an actual sharing buffer size occupied by the transaction in response to the successful commitment of the transaction, and to update the history information of the transaction.Type: ApplicationFiled: November 23, 2009Publication date: June 3, 2010Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Harold Wade Cain, III, Rui Hou, Xiaowei Shen, Huayong Wang
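The predictor/recorder pair in this application can be sketched with a simple running average over past commits. The averaging policy is an assumption for illustration; the application only requires that history inform the prediction:

```python
from collections import defaultdict

class SharingBufferPredictor:
    """Sketch: predict a transaction's buffer need from the recorded
    actual sizes of its past successful commits."""

    def __init__(self, default_size=64):
        self.default = default_size
        self.history = defaultdict(list)   # transaction id -> past sizes

    def predict(self, txn_id):
        """Called when the transaction begins, to size its allocation."""
        sizes = self.history[txn_id]
        if not sizes:
            return self.default            # no history yet: fall back
        return sum(sizes) // len(sizes)

    def record_commit(self, txn_id, actual_size):
        """Called on successful commit to update the history."""
        self.history[txn_id].append(actual_size)
```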
-
Patent number: 7730238Abstract: A method comprises providing a free buffer pool in a memory including a non-negative number of free buffers that are not allocated to a queue for buffering data. A request is received to add one of the free buffers to the queue. One of the free buffers is allocated to the queue in response to the request, if the queue has fewer than a first predetermined number of buffers associated with a session type of the queue. One of the free buffers is allocated to the queue, if a number of buffers in the queue is at least as large as the first predetermined number and less than a second predetermined number associated with the session type, and the number of free buffers is greater than zero.Type: GrantFiled: October 6, 2006Date of Patent: June 1, 2010Assignee: Agere Systems Inc.Inventors: Ambalavanar Arulambalam, Jian-Guo Chen, Nevin C. Heintze, Qian Gao Xu, Jun Chao Zhao
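The two-threshold policy in this abstract maps directly to a small decision function. Parameter names are illustrative, and the reserve accounting that would make the guaranteed region always satisfiable is omitted:

```python
def try_allocate(queue_len, free_count, guaranteed, limit):
    """Sketch of the two-threshold allocation policy: a queue below its
    guaranteed minimum for its session type always gets a buffer; between
    the minimum and the per-session limit it gets one only while the free
    pool is non-empty; at or above the limit it gets nothing."""
    if queue_len < guaranteed:
        return True                        # within the guaranteed region
    if queue_len < limit and free_count > 0:
        return True                        # best-effort region
    return False
```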
-
Patent number: 7720957Abstract: Apparatus and storage media for auto-configuration of an internal network interface are disclosed. Embodiments may install an internal VLAN manager in a logically partitioned computer system along with network agents in each of the partitions in the logically partitioned system to facilitate configuring an internal communications network and the corresponding internal network interfaces in each participating partition. In particular, an administrator accesses internal VLAN manager, selects an internal VLAN ID, selects each of the participating partitions, and configures the communications network with global parameters and ranges. The internal VLAN manager then generates partition parameters and incorporates them into messages for each of the partitions selected to participate in the internal network.Type: GrantFiled: February 11, 2009Date of Patent: May 18, 2010Assignee: International Business Machines CorporationInventors: Charles S. Graham, Harvey G. Kiel, Chetan Mehta, Lee A. Sendelbach, Jaya Srikrishnan
-
Patent number: 7721025Abstract: A session-level cache provides space to reuse a task object and associated memory resources for a new request. Any additional resources necessary for the task are allocated by the system memory allocator.Type: GrantFiled: September 6, 2006Date of Patent: May 18, 2010Assignee: RELDATA, Inc.Inventor: Dmitry Fomichev
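The session-level reuse described above is essentially a one-slot object pool. A minimal sketch, with all names invented for illustration:

```python
class SessionTaskCache:
    """Sketch: per-session cache holding one reusable task object so a
    new request can reuse the task and its buffers instead of reallocating."""

    def __init__(self):
        self._cached = None

    def acquire(self, make_task):
        """Reuse the cached task if present, otherwise build a fresh one
        (the fresh path stands in for the system memory allocator)."""
        if self._cached is not None:
            task, self._cached = self._cached, None
            task.reset()                   # clear per-request state
            return task
        return make_task()

    def release(self, task):
        """Return the task (and its memory) to the session cache."""
        self._cached = task


class EchoTask:
    """Toy task standing in for a real request-handling task object."""
    def __init__(self):
        self.resets = 0
    def reset(self):
        self.resets += 1
```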
-
Publication number: 20100121995Abstract: A method and system for handling received out-of-order network data using generic buffers for non-posting TCP applications is disclosed. When incoming out-of-order data is received and there is no application buffer posted, a TCP data placement may notify a TCP reassembler to terminate a current generic buffer, allocate a new current generic buffer, and DMA the incoming data into the new current generic buffer. The TCP data placement may notify the TCP reassembler the starting TCP sequence number and the length of the new current generic buffer. Moreover, the TCP data placement may add entries into a TCP out-of-order table when the incoming data creates a new disjoint area. The TCP data placement may adjust an existing disjoint area to reflect any updates. When a TCP application allocates or posts a buffer, then the TCP reassembler may copy data from a linked list of generic buffers into posted buffers.Type: ApplicationFiled: November 10, 2009Publication date: May 13, 2010Applicant: BROADCOM CORPORATIONInventors: Kan Frankie Fan, Scott McDaniel
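The out-of-order table maintenance described above (adding a disjoint area for new data, adjusting existing areas) amounts to interval insertion with merging. A sketch over TCP sequence ranges, simplified to ignore 32-bit sequence wraparound:

```python
def add_segment(areas, start, length):
    """Sketch: insert the out-of-order range [start, start+length) into a
    sorted list of disjoint (start, end) areas, merging any areas the new
    data now touches or overlaps. Returns the updated list."""
    end = start + length
    kept = []
    for a_start, a_end in areas:
        if a_end < start or a_start > end:       # fully disjoint: keep as-is
            kept.append((a_start, a_end))
        else:                                    # grow the incoming range
            start, end = min(start, a_start), max(end, a_end)
    kept.append((start, end))
    return sorted(kept)
```

When a posted buffer later arrives, the reassembler would walk these areas (and the generic buffers backing them) in sequence order to copy data out.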
-
Patent number: 7716396Abstract: A system for managing a circular buffer memory includes a number of data writers, a number of data readers, a circular buffer memory; and logic configured to form a number of counters, form a number of temporary variables from the counters, and allow the data writers and the data readers to simultaneously access locations in the circular buffer memory determined by the temporary variables.Type: GrantFiled: February 9, 2007Date of Patent: May 11, 2010Assignee: Juniper Networks, Inc.Inventors: Juqiang Liu, Hua Ji, Haisang Wu
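The counter-based indexing this patent relies on can be shown in a single-threaded sketch: free-running write and read counters, with the slot derived by reduction modulo the capacity. (The concurrent multi-reader/multi-writer coordination is the hard part the patent addresses and is not modeled here.)

```python
class CircularBuffer:
    """Sketch: circular buffer addressed by free-running counters.
    Fullness is write_count - read_count; the slot is counter % capacity."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_count = 0     # total items ever written
        self.read_count = 0      # total items ever read

    def put(self, item):
        if self.write_count - self.read_count == self.capacity:
            raise BufferError("full")
        self.buf[self.write_count % self.capacity] = item
        self.write_count += 1

    def get(self):
        if self.write_count == self.read_count:
            raise BufferError("empty")
        item = self.buf[self.read_count % self.capacity]
        self.read_count += 1
        return item
```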
-
Patent number: 7711873Abstract: A first processor that executes at least one application or process includes a first interface module that interfaces the first processor to a second processor and that includes N interfaces. N is an integer greater than 1. The first processor also includes a first communication control module (CCM) that selects M of the N interfaces based on bandwidth requested by the at least one application to transmit data generated by the at least one application to the second processor.Type: GrantFiled: January 10, 2008Date of Patent: May 4, 2010Assignee: Marvell International Ltd.Inventor: Ofer Zaarur
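Selecting M of the N interfaces from a bandwidth request reduces, in the simplest uniform-link case, to a ceiling division. This sketch assumes all links have equal bandwidth, which the patent does not require:

```python
import math

def select_interfaces(per_link_bandwidth, n_links, requested_bandwidth):
    """Sketch: smallest number M of the N equal-bandwidth links whose
    combined capacity covers the application's request."""
    if requested_bandwidth <= 0:
        return 0
    m = math.ceil(requested_bandwidth / per_link_bandwidth)
    if m > n_links:
        raise ValueError("request exceeds total capacity of N links")
    return m
```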
-
Patent number: 7710426Abstract: Buffers may be shared between components in a system. The components may be loosely coupled, allowing the components to be assembled into various different configurations, and yet buffers may still be shared. A buffer requirements negotiator of the system analyzes the buffer requirements of each of the components and determines, if possible, a set of requirements that satisfies all of the components. Accordingly, savings may be achieved in buffer memory, as well as in copying and converting between unshared buffers. Further, the individual components may operate as efficiently as possible because the buffer requirements of the components in the system are all met. One implementation accesses a first component's buffer requirements and a second component's buffer requirements, determines a reconciled set of buffer requirements that satisfies the buffer requirements of both components, and provides the reconciled set of buffer requirements to one or more components.Type: GrantFiled: April 25, 2005Date of Patent: May 4, 2010Assignee: Apple Inc.Inventor: John Samuel Bushell
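Reconciling two components' buffer requirements can be sketched as combining constraints so the result satisfies both. The requirement shape here (minimum size plus alignment) is an assumption for illustration, not Apple's actual negotiation interface:

```python
import math

def reconcile(reqs_a, reqs_b):
    """Sketch: produce one buffer spec meeting both components'
    requirements by taking the larger minimum size and an alignment that
    is a multiple of both (their least common multiple), then rounding
    the size up to that alignment."""
    align = math.lcm(reqs_a["alignment"], reqs_b["alignment"])
    size = max(reqs_a["min_size"], reqs_b["min_size"])
    size = -(-size // align) * align       # round up to a multiple of align
    return {"min_size": size, "alignment": align}
```

If no spec satisfies both components (e.g. incompatible pixel formats rather than numeric ranges), a real negotiator would have to report failure and fall back to unshared, converted buffers.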
-
Patent number: 7694043Abstract: A method is disclosed for exchanging data between a central processing unit (CPU) and an input/output processor (IOP). The CPU and IOP may both be senders or receivers depending on whether data is flowing to or from the CPU. Where data is flowing to the CPU, the CPU is the receiver and the IOP is the sender. Where data is flowing from the CPU, the CPU is the sender and the IOP is the receiver. A sender evaluates the amount of empty buffers and, in a preferred embodiment, whether there is more data coming to determine whether to release partially full buffers in its buffer pool. Partially full buffers may be released based on any threshold as desired from a simple integer to a complex algorithm. The evaluation of whether to release partially full buffers is preferably implemented where a sender obtains at least one data packet for sending to a receiver and where a sender obtains an empty buffer from a receiver.Type: GrantFiled: April 25, 2007Date of Patent: April 6, 2010Assignee: Unisys CorporationInventor: Craig F. Russ
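The sender-side decision above (release a partially full buffer based on a threshold and on whether more data is coming) can be sketched with a simple fill-ratio policy. The patent allows any threshold, "from a simple integer to a complex algorithm"; the ratio here is just one illustrative choice:

```python
def should_release(fill_bytes, capacity_bytes, more_data_coming,
                   min_fill_ratio=0.5):
    """Sketch: release a partially full buffer to the receiver when no
    more data is expected (flush), or once it is filled past a threshold
    so buffers are not held half-empty while traffic continues."""
    if not more_data_coming:
        return True                        # flush whatever we have
    return fill_bytes / capacity_bytes >= min_fill_ratio
```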
-
Patent number: 7689741Abstract: A dual buffer memory system capable of improving system performance by reducing a data transmission time and a control method thereof are provided. The dual buffer memory system includes a flash memory block and a plurality of buffers. The dual buffer memory system uses a dual buffering scheme in which one buffer among the plurality of buffers interacts with the flash memory block and simultaneously a different buffer among the plurality of buffers interacts with a host. Therefore, it is possible to reduce a data transmission time between the flash memory and the host, thereby improving system performance.Type: GrantFiled: September 13, 2004Date of Patent: March 30, 2010Assignee: Samsung Electronics Co., Ltd.Inventors: Eun-Suk Kang, Jin-Yub Lee
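The dual-buffering (ping-pong) scheme above alternates two buffers so the flash side fills one while the host drains the other. This sequential sketch only shows the role swap; on real hardware the fill and drain steps overlap in time, which is where the transfer-time saving comes from:

```python
def dual_buffer_transfer(flash_pages, write_to_host):
    """Sketch: stream pages from flash to the host through two buffers
    that alternate between the 'being filled' and 'being drained' roles."""
    buffers = [None, None]
    active = 0
    for page in flash_pages:
        if buffers[active] is not None:
            write_to_host(buffers[active])   # host drains the older buffer
        buffers[active] = page               # flash fills this buffer
        active ^= 1                          # swap roles for the next page
    for i in (active, active ^ 1):           # flush remaining, in order
        if buffers[i] is not None:
            write_to_host(buffers[i])
```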
-
Patent number: 7689739Abstract: An apparatus, spread spectrum receiver, and method of controlling a circular buffer, comprising a circular buffer and a controller coupled thereto. The circular buffer receives first data at a first data rate and second data at a second data rate. The controller determines a first range in the circular buffer based on the first data rate and a first time difference between the first write and first read speed, accesses the first data in the first range, estimates a second range in the circular buffer based on the second data rate and a second time difference between the first write and first read speed, and accesses the second data in the second range, where the second range is larger than and partially covered by the first range.Type: GrantFiled: July 10, 2006Date of Patent: March 30, 2010Assignee: Via Technologies, Inc.Inventors: Sung-Chiao Li, Johnson Sebeni, Eric Pan, Huoy Bing Lim
-
Patent number: 7689738Abstract: Methods and systems are provided for reducing partial cache writes in transferring incoming data status entries from a peripheral device to a host. The methods comprise determining a lower limit on a number of available incoming data status entry positions in an incoming data status ring in the host system memory, and selectively transferring a current incoming data status entry to the host system memory using a full cache line write if the lower limit is greater than or equal to a first value. Peripheral systems are provided for providing an interface between a host computer and an external device or network, which comprise a descriptor management system adapted to determine a lower limit on a number of available incoming data status entry positions in an incoming data status ring in a host system memory, and to selectively transfer a current incoming data status entry to the host system memory using a full cache line write if the lower limit is greater than or equal to a first value.Type: GrantFiled: October 1, 2003Date of Patent: March 30, 2010Assignee: Advanced Micro Devices, Inc.Inventors: Robert Alan Williams, Jeffrey Dwork
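The write-selection logic above can be sketched as a single comparison: if the conservative lower bound on free status-ring slots covers a whole cache line of entries, the device can safely pad its DMA write out to the full line; otherwise it must fall back to a partial write. The return-value encoding here is an illustration only:

```python
def write_status_entry(entry, ring_space_lower_bound, entries_per_line):
    """Sketch: pad the status write to a full cache line when the known
    lower bound on free ring slots guarantees the padding cannot clobber
    unconsumed entries. Returns the slots written (None = padding)."""
    if ring_space_lower_bound >= entries_per_line:
        return [entry] + [None] * (entries_per_line - 1)  # full-line write
    return [entry]                                        # partial write
```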
-
Patent number: RE41705Abstract: The present invention relates to a process and a device for handling the execution of a job in an open data processing system as a function of the resources. The process comprises the steps of: determining system resources available in virtual memory, real memory, temporary file space, and central processing unit utilization time during a given interval; computing the amount of resources preallocated to other requests and not yet used; comparing the amount of resources required for the execution of a job for which the request has been presented to the current amount of resources available minus the total amount of resources preallocated to other requests, in order to determine, as a function of the result of this comparison, the start, the deferral, or the denial of the start of the requested job.Type: GrantFiled: May 4, 2004Date of Patent: September 14, 2010Assignee: Bull S.A.Inventors: Daniel Lucien Durand, Gerard Sitbon, Francois Urbain
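The admission test in this abstract is a per-resource comparison of need against availability net of prior commitments. A sketch over one resource vector (virtual memory, real memory, temp file space, CPU time), with the defer/deny choice left as a policy stub:

```python
def admission_decision(required, available, preallocated):
    """Sketch: start the job only if, for every resource, its requirement
    fits in what is currently available minus what is already promised
    (preallocated) to other requests but not yet used."""
    for resource, need in required.items():
        if need > available[resource] - preallocated.get(resource, 0):
            return "defer"      # a real system may defer or deny here
    return "start"
```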