Patents by Inventor Lucien Mirabeau
Lucien Mirabeau has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 7844752
Abstract: A method, apparatus and program storage device for enabling multiple asynchronous direct memory access (DMA) task executions. DMA I/O operations and performance are improved by reducing the overhead of DMA chaining events: a software DMA queue is created when a hardware DMA queue overflows, and new DMA requests are dynamically linked to the software queue until a hardware queue becomes available, at which time the software queue is put on the hardware queue. Thus, microcode does not need to manage the hardware queues, and the DMA engine can run continuously because it no longer has to wait for microcode to reset the DMA chain completion indicator.
Type: Grant
Filed: November 30, 2005
Date of Patent: November 30, 2010
Assignee: International Business Machines Corporation
Inventors: Lucien Mirabeau, Tiep Quoc Pham
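The overflow scheme the abstract describes can be sketched as a fixed-depth hardware queue backed by an unbounded software queue. This is a minimal illustrative model, not the patented implementation; the class name, queue depth, and method names are assumptions.

```python
from collections import deque

class DmaDispatcher:
    """Sketch of the overflow scheme: a fixed-depth hardware queue
    backed by an unbounded software overflow queue."""

    def __init__(self, hw_depth=4):
        self.hw_depth = hw_depth   # slots the DMA engine can chain at once
        self.hw_queue = deque()    # requests the engine is working on
        self.sw_queue = deque()    # overflow held in software

    def submit(self, request):
        # A new request goes to hardware if a slot is free,
        # otherwise it is linked onto the software queue.
        if len(self.hw_queue) < self.hw_depth:
            self.hw_queue.append(request)
        else:
            self.sw_queue.append(request)

    def on_hw_completion(self):
        # A hardware slot freed up: drain the software queue onto it,
        # so the engine keeps running without per-request intervention.
        self.hw_queue.popleft()
        while self.sw_queue and len(self.hw_queue) < self.hw_depth:
            self.hw_queue.append(self.sw_queue.popleft())
```

With a depth of 2, submitting five requests leaves requests 0 and 1 on the hardware queue and 2 through 4 queued in software; each completion then pulls the next software entry onto the hardware queue.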
-
Patent number: 7650467
Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
Type: Grant
Filed: March 20, 2008
Date of Patent: January 19, 2010
Assignee: International Business Machines Corporation
Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
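The signaling pattern in this abstract — one processor spinning on reads of a shared line until the other processor's write both invalidates the cached copy and delivers the "resources free" signal — can be simulated with two threads. This sketch only shows the poll-then-read protocol; it does not model the cache coherency mechanics, and the names and sentinel values are illustrative.

```python
import threading

shared_line = [0]   # stands in for the line of shared memory
result = []

def first_processor():
    # Spin-read the line; on real hardware these reads are cheap
    # cache hits until the second processor's write forces a
    # coherency-protocol state change on the cached line.
    while shared_line[0] == 0:
        pass
    # The write that ended the spin also carries the data.
    result.append(shared_line[0])

def second_processor():
    # ...finish work involving the shared resources, then write to
    # the shared line to signal that they may now be accessed.
    shared_line[0] = 42

a = threading.Thread(target=first_processor)
a.start()
second_processor()
a.join()
```

The first processor stays occupied (and off the shared resources) for exactly as long as the second processor holds them, with no explicit lock traffic beyond the single signaling write.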
-
Patent number: 7469305
Abstract: In response to multiple data transfer requests from an application, a data definition (DD) chain is generated. The DD chain is divided into multiple DD sub-blocks by determining a bandwidth of channels (BOC) and whether the BOC is less than the size of the DD chain. If so, the DD chain is divided among the available DMA engines. If not, the DD chain is divided by an optimum atomic transfer unit (OATU). If the division yields a remainder, the remainder is handled in the last DD sub-block: if the remainder is less than a predetermined value, the size of the last DD sub-block is set to the OATU plus the remainder; otherwise, the size of the last DD sub-block is set to the remainder. The DD sub-blocks are subsequently loaded into a set of available DMA engines. Each of the available DMA engines performs data transfers on a corresponding DD sub-block until the entire DD chain has been completed.
Type: Grant
Filed: September 20, 2006
Date of Patent: December 23, 2008
Assignee: International Business Machines Corporation
Inventors: Lucien Mirabeau, Tiep Q. Pham
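The division rule in this abstract can be sketched as a small function. The parameter names (in particular `threshold` for the "predetermined value") are assumptions, and sizes are treated as plain integers; this is an illustrative reading of the rule, not the claimed implementation.

```python
def divide_dd_chain(chain_len, bandwidth, num_engines, oatu, threshold):
    """Split a DD chain of chain_len units into sub-block sizes."""
    if bandwidth < chain_len:
        # Channel bandwidth is the bottleneck: divide the chain
        # across the available DMA engines.
        sizes = [chain_len // num_engines] * num_engines
        sizes[-1] += chain_len % num_engines
        return sizes
    # Otherwise divide by the optimum atomic transfer unit (OATU).
    sizes = [oatu] * (chain_len // oatu)
    remainder = chain_len % oatu
    if remainder:
        if remainder < threshold:
            # Small remainder: fold it into the last sub-block
            # (last block becomes OATU + remainder).
            sizes[-1] += remainder
        else:
            # Large remainder: it becomes its own last sub-block.
            sizes.append(remainder)
    return sizes
```

For example, a 10-unit chain with OATU 4 and threshold 2 yields sub-blocks of 4, 4 and 2 (the remainder stands alone), while a 9-unit chain yields 4 and 5 (the remainder of 1 is folded in).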
-
Patent number: 7418557
Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
Type: Grant
Filed: November 30, 2004
Date of Patent: August 26, 2008
Assignee: International Business Machines Corporation
Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
-
Publication number: 20080168238
Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
Type: Application
Filed: March 20, 2008
Publication date: July 10, 2008
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
-
Publication number: 20080126605
Abstract: A method for handling multiple data transfer requests from one application within a computer system is disclosed. In response to the receipt of multiple data transfer requests from an application, a data definition (DD) chain is generated for all the data transfer requests. The DD chain is then divided into multiple DD sub-blocks. The DD sub-blocks are subsequently loaded into a set of available direct memory access (DMA) engines. Each of the available DMA engines performs data transfers on a corresponding DD sub-block until the entire DD chain has been completed.
Type: Application
Filed: September 20, 2006
Publication date: May 29, 2008
Inventors: Lucien Mirabeau, Tiep Q. Pham
-
Patent number: 7337367
Abstract: An error handling method is provided for processing adapter errors. Rather than executing a disruptive controller hardware reset, an error handling routine provides instructions for a reset operation to be loaded and executed from cache while the SDRAM is in self-refresh mode and therefore unusable.
Type: Grant
Filed: January 6, 2005
Date of Patent: February 26, 2008
Assignee: International Business Machines Corporation
Inventors: Lucien Mirabeau, Charles S Cardinell, Man Wah Ma, Ricardo S Padilla
-
Publication number: 20070162637
Abstract: A method, apparatus and program storage device for enabling multiple asynchronous direct memory access (DMA) task executions. DMA I/O operations and performance are improved by reducing the overhead of DMA chaining events: a software DMA queue is created when a hardware DMA queue overflows, and new DMA requests are dynamically linked to the software queue until a hardware queue becomes available, at which time the software queue is put on the hardware queue. Thus, microcode does not need to manage the hardware queues, and the DMA engine can run continuously because it no longer has to wait for microcode to reset the DMA chain completion indicator.
Type: Application
Filed: November 30, 2005
Publication date: July 12, 2007
Inventors: Lucien Mirabeau, Tiep Pham
-
Publication number: 20070118664
Abstract: A mail dispatch system for a computer subsystem has at least one data processing server. An interface device is configurable to electrically couple to the server. The device has an input/output (I/O) port for sending and receiving information. A mail dispatch module is adapted to be operable on the interface device. The module includes a mail dispatch algorithm adapted to identify a first number of data processing servers, identify a second number of I/O ports, and calculate an optimal mail dispatch unit based on the first and the second numbers.
Type: Application
Filed: October 24, 2005
Publication date: May 24, 2007
Applicant: International Business Machines Corporation
Inventors: Lucien Mirabeau, Tiep Pham
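The abstract names the two inputs (server count and I/O port count) but not the formula relating them, so the ceiling-ratio rule in this sketch is purely an illustrative assumption, as are the class and method names.

```python
import math

class MailDispatchModule:
    """Illustrative sketch: identify the two counts the abstract
    names and derive a dispatch unit from them. The formula here
    is an assumption, not the one claimed in the publication."""

    def __init__(self, servers, io_ports):
        self.num_servers = len(servers)   # the "first number"
        self.num_ports = len(io_ports)    # the "second number"

    def dispatch_unit(self):
        # One plausible reading: size each dispatch so every I/O
        # port serves roughly an equal share of the servers.
        return math.ceil(self.num_servers / self.num_ports)
```

For instance, ten servers behind four I/O ports would give a dispatch unit of three under this assumed rule.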
-
Publication number: 20060150030
Abstract: An error handling method is provided for processing adapter errors. Rather than executing a disruptive controller hardware reset, an error handling routine provides instructions for a reset operation to be loaded and executed from cache while the SDRAM is in self-refresh mode and therefore unusable.
Type: Application
Filed: January 6, 2005
Publication date: July 6, 2006
Applicant: International Business Machines (IBM) Corporation
Inventors: Lucien Mirabeau, Charles Cardinell, Man Ma, Ricardo Padilla
-
Publication number: 20060117147
Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
Type: Application
Filed: November 30, 2004
Publication date: June 1, 2006
Inventors: Stephen Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Rankin, Cheng-Chung Song