Patents by Inventor Lucien Mirabeau

Lucien Mirabeau has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7844752
    Abstract: A method, apparatus, and program storage device for enabling multiple asynchronous direct memory access (DMA) task executions. DMA I/O operations and performance are improved by reducing the overhead of DMA chaining events: when a hardware DMA queue overflows, a software DMA queue is created and new DMA requests are dynamically linked to it until a hardware queue becomes available, at which time the software queue is put on the hardware queue. Thus, microcode does not need to manage the hardware queues, and the DMA engine runs continuously because it no longer has to wait for microcode to reset the DMA chain completion indicator.
    Type: Grant
    Filed: November 30, 2005
    Date of Patent: November 30, 2010
    Assignee: International Business Machines Corporation
    Inventors: Lucien Mirabeau, Tiep Quoc Pham
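The queuing scheme in the abstract above can be sketched as follows. This is an illustrative model only, not IBM's implementation; the class and method names are ours, and the hardware queue is reduced to a fixed-capacity list:

```python
from collections import deque

class DmaDispatcher:
    """Illustrative sketch of the overflow scheme: a fixed-capacity
    hardware DMA queue backed by a software overflow queue."""

    def __init__(self, hw_capacity):
        self.hw_capacity = hw_capacity   # slots in the hardware DMA queue
        self.hw_queue = deque()          # requests currently chained in hardware
        self.sw_queue = deque()          # software overflow queue

    def submit(self, request):
        # New requests go to hardware while it has room; once the hardware
        # queue overflows, link them onto the software queue instead.
        if not self.sw_queue and len(self.hw_queue) < self.hw_capacity:
            self.hw_queue.append(request)
        else:
            self.sw_queue.append(request)

    def on_hw_slot_free(self):
        # Called when the DMA engine completes a chained request: drain the
        # software queue onto the hardware queue as capacity allows, so the
        # engine never idles waiting for microcode.
        self.hw_queue.popleft()
        while self.sw_queue and len(self.hw_queue) < self.hw_capacity:
            self.hw_queue.append(self.sw_queue.popleft())
```

The key property is that `submit` never blocks on a full hardware queue, and completions automatically refill hardware from software, matching the abstract's claim that the engine keeps running without microcode intervention.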
  • Patent number: 7650467
    Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 20, 2008
    Date of Patent: January 19, 2010
    Assignee: International Business Machines Corporation
    Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
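The polling scheme in the abstract above can be modeled with a toy cache line. This is a simplified illustration of the idea, not the patented mechanism: real coherency is handled by the hardware protocol (e.g., MESI-style invalidation), which we approximate here with a single `valid` flag:

```python
class ToyCacheLine:
    """Toy model of the signaling scheme: processor 1 spins on a cached
    copy of a shared-memory line; the spin is satisfied from cache until
    processor 2's write invalidates the line."""

    def __init__(self):
        self.memory = 0        # shared-memory line (0 = resources in use)
        self.cached = 0        # processor 1's cached copy of the line
        self.valid = True      # coherency state of the cached copy
        self.mem_reads = 0     # how often polling actually hit memory

    def p1_poll(self):
        # Repetitive read by the first processor: it re-fetches from
        # shared memory only after the coherency protocol invalidates
        # the line; otherwise the read stays inside the cache.
        if not self.valid:
            self.cached = self.memory
            self.valid = True
            self.mem_reads += 1
        return self.cached

    def p2_signal(self, value):
        # The second processor finishes with the shared resources and
        # writes the line, invalidating processor 1's cached copy.
        self.memory = value
        self.valid = False
```

This shows why the technique is cheap: thousands of polls cost zero memory traffic until the one write that signals availability.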
  • Patent number: 7469305
    Abstract: In response to multiple data transfer requests from an application, a data definition (DD) chain is generated. The DD chain is divided into multiple DD sub-blocks by determining a bandwidth of channels (BOC) and whether the BOC is less than the length of the DD chain. If so, the DD chain is divided by the available DMA engines. If not, the DD chain is divided by an optimum atomic transfer unit (OATU). If the division yields a remainder, the remainder is added to a last DD sub-block. If the remainder is less than a predetermined value, the size of the last DD sub-block is set to the OATU plus the remainder. Otherwise, the size of the last DD sub-block is set to the remainder. The DD sub-blocks are subsequently loaded into a set of available DMA engines. Each of the available DMA engines performs data transfers on a corresponding DD sub-block until the entire DD chain has been completed.
    Type: Grant
    Filed: September 20, 2006
    Date of Patent: December 23, 2008
    Assignee: International Business Machines Corporation
    Inventors: Lucien Mirabeau, Tiep Q. Pham
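The division rule in the abstract above can be sketched as a function. This is our reading of the abstract only, with parameter names of our choosing; the patent claims define the exact procedure:

```python
def split_dd_chain(chain_size, num_engines, boc, oatu, threshold):
    """Illustrative sketch of the DD-chain division rule: choose the
    sub-block size, then fold or split any remainder."""
    # Divisor choice per the abstract: divide by the available DMA
    # engines when the channel bandwidth (BOC) is less than the chain
    # length; otherwise use the optimum atomic transfer unit (OATU).
    unit = chain_size // num_engines if boc < chain_size else oatu
    blocks, remainder = divmod(chain_size, unit)
    sizes = [unit] * blocks
    if remainder:
        if remainder < threshold:
            # Small remainder: last sub-block becomes unit + remainder.
            sizes[-1] += remainder
        else:
            # Large remainder: it becomes its own final sub-block.
            sizes.append(remainder)
    return sizes
```

For example, a 70-unit chain with ample bandwidth and an OATU of 16 splits into `[16, 16, 16, 22]` when the 6-unit remainder falls below the threshold.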
  • Patent number: 7418557
    Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 30, 2004
    Date of Patent: August 26, 2008
    Assignee: International Business Machines Corporation
    Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
  • Publication number: 20080168238
    Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
    Type: Application
    Filed: March 20, 2008
    Publication date: July 10, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Stephen LaRoux Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Dean Rankin, Cheng-Chung Song
  • Publication number: 20080126605
    Abstract: A method for handling multiple data transfer requests from one application within a computer system is disclosed. In response to the receipt of multiple data transfer requests from an application, a data definition (DD) chain is generated for all the data transfer requests. The DD chain is then divided into multiple DD sub-blocks. The DD sub-blocks are subsequently loaded into a set of available direct memory access (DMA) engines. Each of the available DMA engines performs data transfers on a corresponding DD sub-block until the entire DD chain has been completed.
    Type: Application
    Filed: September 20, 2006
    Publication date: May 29, 2008
    Inventors: Lucien Mirabeau, Tiep Q. Pham
  • Patent number: 7337367
    Abstract: An error handling method is provided for processing adapter errors. Rather than executing a disruptive controller hardware reset, an error handling routine provides instructions for a reset operation to be loaded and executed from cache while the SDRAM is in self-refresh mode and therefore unusable.
    Type: Grant
    Filed: January 6, 2005
    Date of Patent: February 26, 2008
    Assignee: International Business Machines Corporation
    Inventors: Lucien Mirabeau, Charles S Cardinell, Man Wah Ma, Ricardo S Padilla
  • Publication number: 20070162637
    Abstract: A method, apparatus, and program storage device for enabling multiple asynchronous direct memory access (DMA) task executions. DMA I/O operations and performance are improved by reducing the overhead of DMA chaining events: when a hardware DMA queue overflows, a software DMA queue is created and new DMA requests are dynamically linked to it until a hardware queue becomes available, at which time the software queue is put on the hardware queue. Thus, microcode does not need to manage the hardware queues, and the DMA engine runs continuously because it no longer has to wait for microcode to reset the DMA chain completion indicator.
    Type: Application
    Filed: November 30, 2005
    Publication date: July 12, 2007
    Inventors: Lucien Mirabeau, Tiep Pham
  • Publication number: 20070118664
    Abstract: A mail dispatch system for a computer subsystem has at least one data processing server. An interface device is configurable to electrically couple to the server. The device has an input/output (I/O) port for sending and receiving information. A mail dispatch module is adapted to be operable on the interface device. The module includes a mail dispatch algorithm adapted to identify a first number of data processing servers, identify a second number of I/O ports, and calculate an optimal mail dispatch unit based on the first and the second numbers.
    Type: Application
    Filed: October 24, 2005
    Publication date: May 24, 2007
    Applicant: International Business Machines Corporation
    Inventors: Lucien Mirabeau, Tiep Pham
  • Publication number: 20060150030
    Abstract: An error handling method is provided for processing adapter errors. Rather than executing a disruptive controller hardware reset, an error handling routine provides instructions for a reset operation to be loaded and executed from cache while the SDRAM is in self-refresh mode and therefore unusable.
    Type: Application
    Filed: January 6, 2005
    Publication date: July 6, 2006
    Applicant: International Business Machines (IBM) Corporation
    Inventors: Lucien Mirabeau, Charles Cardinell, Man Ma, Ricardo Padilla
  • Publication number: 20060117147
    Abstract: In managing multiprocessor operations, a first processor repetitively reads a cache line wherein the cache line is cached from a line of a shared memory of resources shared by both the first processor and a second processor. Coherency is maintained between the shared memory line and the cache line in accordance with a cache coherency protocol. In one aspect, the repetitive cache line reading occupies the first processor and inhibits the first processor from accessing the shared resources. In another aspect, upon completion of operations by the second processor involving the shared resources, the second processor writes data to the shared memory line to signal to the first processor that the shared resources may be accessed by the first processor. In response, the first processor changes the state of the cache line in accordance with the cache coherency protocol and reads the data written by the second processor. Other embodiments are described and claimed.
    Type: Application
    Filed: November 30, 2004
    Publication date: June 1, 2006
    Inventors: Stephen Blinick, Yu-Cheng Hsu, Lucien Mirabeau, Ricky Rankin, Cheng-Chung Song