Patents by Inventor Michael R. Trombley

Michael R. Trombley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20080240110
    Abstract: A memory subsystem includes Data Store 0 and Data Store 1. Each data store is partitioned into N buffers, N>1. An increment of memory is formed by a buffer pair, with each buffer of the buffer pair being in a different data store. Two buffer pair formats are used in forming memory increments. A first format selects a first buffer from Data Store 0 and a second buffer from Data Store 1, while a second format selects a first buffer from Data Store 1 and a second buffer from Data Store 0. A controller selects a buffer pair for storing data based upon the configuration of data in a delivery mechanism, such as a switch cell.
    Type: Application
    Filed: July 11, 2007
    Publication date: October 2, 2008
    Applicant: International Business Machines Corporation
    Inventors: Brian M. Bass, Gordon T. Davis, Michael S. Siegel, Michael R. Trombley
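
A minimal sketch of the buffer-pair selection described in publication 20080240110, assuming a hypothetical rule that picks format A (Data Store 0 first) or format B (Data Store 1 first) from the layout of the incoming switch cell; the names DS0, DS1, and select_pair are illustrative, not claim language.

```c
/* Sketch of the two buffer-pair formats from the abstract: a memory
 * increment is a pair of buffers drawn from two data stores, in either
 * (DS0, DS1) or (DS1, DS0) order.  The format-selection rule below is
 * an illustrative assumption, not taken from the patent claims. */
#include <stdio.h>

enum data_store { DS0 = 0, DS1 = 1 };

struct buffer_pair {
    enum data_store first;   /* data store holding the first buffer  */
    enum data_store second;  /* data store holding the second buffer */
    int index;               /* buffer index (0..N-1) in each store  */
};

/* Assumption: the controller inspects the incoming switch cell and
 * chooses format A (DS0 then DS1) or format B (DS1 then DS0). */
struct buffer_pair select_pair(int cell_starts_in_second_half, int free_index)
{
    struct buffer_pair p;
    p.first  = cell_starts_in_second_half ? DS1 : DS0;
    p.second = cell_starts_in_second_half ? DS0 : DS1;
    p.index  = free_index;
    return p;
}

int main(void)
{
    struct buffer_pair a = select_pair(0, 3);
    struct buffer_pair b = select_pair(1, 7);
    printf("format A: first=DS%d second=DS%d index=%d\n", a.first, a.second, a.index);
    printf("format B: first=DS%d second=DS%d index=%d\n", b.first, b.second, b.index);
    return 0;
}
```
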
  • Publication number: 20080215832
    Abstract: A method and system for scheduling the servicing of data requests, using the variable latency mode, in an FBDIMM memory sub-system. A scheduling algorithm pre-computes return time data for data connected to all DRAM buffer chips and stores the return time data in a table. The return time data is expressed as a set of data return time binary vectors with one bit equal to “1” in each vector. For each received data request, the memory controller retrieves the appropriate return time vector. Additionally, the scheduling algorithm utilizes an updated history vector representing a compilation of data return time vectors of all executing requests to determine whether the received request presents a conflict to the executing requests. By computing and utilizing a score for each request, the scheduling algorithm re-orders and schedules the execution of selected requests to preserve as much data bus bandwidth as possible, while avoiding conflict.
    Type: Application
    Filed: March 1, 2007
    Publication date: September 4, 2008
    Inventors: James J. Allen, Steven K. Jenkins, Michael R. Trombley
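
A sketch of the vector bookkeeping described in publications 20080215832 and 20080215783: one-hot return-time vectors, an OR-accumulated history vector, an AND-based conflict test, and a simple earliest-return scoring rule. The scoring rule and the names (timevec_t, pick_request) are assumptions for illustration; the patents do not publish this code.

```c
/* Each request's data return time is a one-hot bit vector (bit k set
 * means "data arrives on the bus k cycles from now"); the history
 * vector ORs together the vectors of all requests in flight, and a new
 * request conflicts iff its vector ANDs with the history. */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t timevec_t;          /* bit k == data on bus in cycle k */

static int conflicts(timevec_t request, timevec_t history)
{
    return (request & history) != 0; /* same bus cycle already claimed */
}

/* Pick the best request from a pending queue: lowest return latency
 * among those that do not collide with data already scheduled. */
static int pick_request(const timevec_t *pending, int n, timevec_t history)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (conflicts(pending[i], history))
            continue;
        if (best < 0 || pending[i] < pending[best])  /* lower bit = earlier */
            best = i;
    }
    return best;
}

int main(void)
{
    timevec_t history = (1u << 4);                   /* cycle 4 in use   */
    timevec_t pending[] = { 1u << 4, 1u << 6, 1u << 9 };
    int chosen = pick_request(pending, 3, history);
    if (chosen >= 0) {
        history |= pending[chosen];                  /* update history   */
        printf("scheduled request %d, history=0x%x\n", chosen, history);
    }
    return 0;
}
```
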
  • Publication number: 20080215783
    Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for scheduling the servicing of data requests, using the variable latency mode, in an FBDIMM memory sub-system is provided. A scheduling algorithm pre-computes return time data for data connected to DRAM buffer chips and stores the return time data in a table. The return time data is expressed as data return time binary vectors with one bit equal to “1” in each vector. For each received data request, the memory controller retrieves the appropriate return time vector. Additionally, the scheduling algorithm utilizes an updated history vector to determine whether the received request presents a conflict to the executing requests. By computing and utilizing a score for each request, the scheduling algorithm re-orders and schedules the execution of selected requests to preserve as much data bus bandwidth as possible, while avoiding conflict.
    Type: Application
    Filed: April 28, 2008
    Publication date: September 4, 2008
    Inventors: James J. Allen, Steven K. Jenkins, Michael R. Trombley
  • Publication number: 20080209095
    Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is provided. The design structure generally includes a processor memory system, which may include a processor and a memory controller in communication with the processor through a bus. The memory controller may include a delay circuit to receive an early read indicator corresponding to read data from a memory, the delay circuit to delay the early read indicator in accordance with a pre-determined delay such that the early read indicator is passed to the bus in advance of the read data, and a delay adjustment circuit to dynamically adjust the pre-determined delay associated with the delay circuit responsive to a change in operational speed of the processor or the bus.
    Type: Application
    Filed: May 4, 2008
    Publication date: August 28, 2008
    Inventors: James J. Allen, Steven K. Jenkins, James A. Mossman, Michael R. Trombley
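
A sketch of the early-read-indicator timing described in publication 20080209095 and the closely related method publication 20080168293 below, under the assumption that the programmed delay is recomputed from the controller-to-bus clock ratio when the bus or processor speed changes; the formula and the names (adjust_delay, indicator_delay) are illustrative only.

```c
/* The early read indicator arrives some cycles before the data, is
 * held for a programmed delay, and the delay is recomputed when the
 * clock ratio changes.  The ratio-based formula is an assumption. */
#include <stdio.h>

struct indicator_delay {
    unsigned cycles;  /* current programmed delay, in controller cycles */
};

/* Assumption: the indicator should reach the bus a fixed number of bus
 * cycles ahead of the data, so the controller-cycle delay scales with
 * the controller:bus clock ratio. */
static void adjust_delay(struct indicator_delay *d,
                         unsigned data_latency_cycles,
                         unsigned lead_bus_cycles,
                         unsigned ctrl_clk_mhz,
                         unsigned bus_clk_mhz)
{
    unsigned lead_ctrl_cycles = lead_bus_cycles * ctrl_clk_mhz / bus_clk_mhz;
    d->cycles = (data_latency_cycles > lead_ctrl_cycles)
              ? data_latency_cycles - lead_ctrl_cycles
              : 0;
}

int main(void)
{
    struct indicator_delay d = { 0 };
    adjust_delay(&d, 24, 4, 800, 400);   /* fast bus                  */
    printf("delay = %u cycles\n", d.cycles);
    adjust_delay(&d, 24, 4, 800, 200);   /* bus slows: delay shrinks  */
    printf("delay = %u cycles\n", d.cycles);
    return 0;
}
```
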
  • Publication number: 20080168293
    Abstract: Methods and systems for reducing latency associated with a read operation in a processor memory system are provided. In one implementation, the method includes receiving an early read indicator corresponding to read data from a memory; delaying the early read indicator in accordance with a pre-determined delay such that it is passed to a bus in advance of the read data; and dynamically adjusting the pre-determined delay using a delay adjustment circuit, the pre-determined delay being adjusted responsive to a change in the operational speed of the bus or of a processor coupled to the bus.
    Type: Application
    Filed: January 9, 2007
    Publication date: July 10, 2008
    Applicant: International Business Machines Corporation
    Inventors: James J. Allen, Steven K. Jenkins, James A. Mossman, Michael R. Trombley
  • Publication number: 20080168329
    Abstract: A two-level error control protocol detects errors on the subline level and corrects errors using the codeword for the entire line. This enables a system to read small pieces of coded data and check for errors before accepting them, and in case errors are detected, the whole codeword is read for error correction.
    Type: Application
    Filed: January 4, 2007
    Publication date: July 10, 2008
    Inventors: Junsheng Han, Luis A. Lastras-Montano, Michael R. Trombley
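
A stand-in illustration of the two-level scheme in publication 20080168329: each subline carries its own detection check, and the line-level code is consulted only when that check fails. Here the subline check is an XOR checksum and the line-level "codeword" is one parity subline that can rebuild a single flagged subline; the patent's actual codes are not specified in the abstract, so this is an analogy, not the claimed construction.

```c
/* Fast path: read one subline and verify its own check.
 * Slow path: on a failed check, use line-level redundancy (here a
 * parity subline) to rebuild the flagged subline. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SUBLINES  4
#define SUBSIZE   8   /* bytes per subline */

static uint8_t checksum(const uint8_t *s)
{
    uint8_t c = 0;
    for (int i = 0; i < SUBSIZE; i++)
        c ^= s[i];
    return c;
}

/* Read one subline; fall back to line-level correction on a bad check. */
static void read_subline(uint8_t line[SUBLINES][SUBSIZE],
                         const uint8_t sums[SUBLINES],
                         const uint8_t parity[SUBSIZE],
                         int idx, uint8_t out[SUBSIZE])
{
    if (checksum(line[idx]) == sums[idx]) {        /* small read is clean */
        memcpy(out, line[idx], SUBSIZE);
        return;
    }
    /* Rebuild the flagged subline from the rest of the line. */
    memcpy(out, parity, SUBSIZE);
    for (int s = 0; s < SUBLINES; s++)
        if (s != idx)
            for (int i = 0; i < SUBSIZE; i++)
                out[i] ^= line[s][i];
}

int main(void)
{
    uint8_t line[SUBLINES][SUBSIZE], parity[SUBSIZE] = {0}, sums[SUBLINES];
    for (int s = 0; s < SUBLINES; s++) {
        for (int i = 0; i < SUBSIZE; i++) {
            line[s][i] = (uint8_t)(s * 16 + i);
            parity[i] ^= line[s][i];
        }
        sums[s] = checksum(line[s]);
    }
    line[2][5] ^= 0xFF;                            /* inject an error */
    uint8_t out[SUBSIZE];
    read_subline(line, sums, parity, 2, out);
    printf("recovered byte: 0x%02x (expected 0x%02x)\n", out[5], 2 * 16 + 5);
    return 0;
}
```
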
  • Patent number: 7286543
    Abstract: A memory subsystem includes Data Store 0 and Data Store 1. Each data store is partitioned into N buffers, N>1. An increment of memory is formed by a buffer pair, with each buffer of the buffer pair being in a different data store. Two buffer pair formats are used in forming memory increments. A first format selects a first buffer from Data Store 0 and a second buffer from Data Store 1, while a second format selects a first buffer from Data Store 1 and a second buffer from Data Store 0. A controller selects a buffer pair for storing data based upon the configuration of data in a delivery mechanism, such as a switch cell.
    Type: Grant
    Filed: February 20, 2003
    Date of Patent: October 23, 2007
    Assignee: International Business Machines Corporation
    Inventors: Brian M. Bass, Gordon T. Davis, Michael S. Siegel, Michael R. Trombley
  • Publication number: 20030161315
    Abstract: A memory subsystem includes Data Store 0 and Data Store 1. Each data store is partitioned into N buffers, N>1. An increment of memory is formed by a buffer pair, with each buffer of the buffer pair being in a different data store. Two buffer pair formats are used in forming memory increments. A first format selects a first buffer from Data Store 0 and a second buffer from Data Store 1, while a second format selects a first buffer from Data Store 1 and a second buffer from Data Store 0. A controller selects a buffer pair for storing data based upon the configuration of data in a delivery mechanism, such as a switch cell.
    Type: Application
    Filed: February 20, 2003
    Publication date: August 28, 2003
    Applicant: International Business Machines Corporation
    Inventors: Brian M. Bass, Gordon T. Davis, Michael S. Siegel, Michael R. Trombley
  • Patent number: 5781763
    Abstract: A mixed-endian computer system enhanced to manage I/O DMA without a software DMA performance penalty. A mixed-endian computer system can change endian mode on a task-by-task basis if necessary. The mixed-endian system, as enhanced, performs one of two well-defined DMA operations based on control bits either in the DMA control register or in a bit vector associated with each page of processor storage. This invention also describes means for treating I/O registers as if they were in the endianness of the executing processor, instead of the more typical requirement that the register operate in a particular endianness.
    Type: Grant
    Filed: May 22, 1997
    Date of Patent: July 14, 1998
    Assignee: International Business Machines Corporation
    Inventors: Bruce Leroy Beukema, Gary Scott Delp, Larry Wayne Loen, Daniel Frank Moertl, Michael R. Trombley
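
A sketch of the endian-control idea in patent 5781763, assuming a hypothetical per-page control bit that tells the DMA engine whether to copy words as-is or byte-reversed; the bit name, word size, and swap placement are illustrative assumptions, not the patent's mechanism.

```c
/* A control bit selects whether DMA data is copied as-is or
 * byte-reversed within each word, so the data matches the endian mode
 * of the task that owns the page.  Bit layout is illustrative. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BIGENDIAN 0x1   /* hypothetical per-page control bit */

static void dma_copy_words(uint32_t *dst, const uint32_t *src,
                           size_t nwords, unsigned page_ctrl)
{
    for (size_t i = 0; i < nwords; i++) {
        uint32_t w = src[i];
        if (page_ctrl & PAGE_BIGENDIAN)
            w = (w >> 24) | ((w >> 8) & 0xFF00) |
                ((w << 8) & 0xFF0000) | (w << 24);   /* byte reversal */
        dst[i] = w;
    }
}

int main(void)
{
    uint32_t src[2] = { 0x11223344, 0xAABBCCDD }, dst[2];
    dma_copy_words(dst, src, 2, PAGE_BIGENDIAN);
    printf("0x%08X 0x%08X\n", (unsigned)dst[0], (unsigned)dst[1]);
    return 0;   /* prints 0x44332211 0xDDCCBBAA */
}
```
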
  • Patent number: 5471626
    Abstract: An instruction pipeline includes a sequence of interconnected pipeline stages, each stage dedicated to one of several operations executed on data in a digital processing device. Control words govern execution of the operations as they progress through the pipeline. The pipeline stages, as well as the pipeline entry and exit, are interconnected in a manner that permits each control word to enter and exit the pipeline at any one of the stages, and to skip any stages in which the control word will not govern any operations on data. On occasion, this permits a control word to bypass another control word that originally preceded it in the pipeline, thus reversing the order of the two control words. A mapping field in each control word predetermines its route through the instruction pipeline, one bit of the mapping field corresponding to each pipeline stage.
    Type: Grant
    Filed: May 6, 1992
    Date of Patent: November 28, 1995
    Assignee: International Business Machines Corporation
    Inventors: Michael J. Carnevale, Ronald N. Kalla, Gary P. McClannahan, Michael R. Trombley
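
A sketch of the map-field routing in patent 5471626: each control word carries one bit per pipeline stage and visits only the stages whose bit is set. The four stage names, the 4-stage depth, and the function route are illustrative; the patent does not name them.

```c
/* Walk a control word through the pipeline, visiting only the stages
 * whose map-field bit is set, entering at the first such stage and
 * exiting after the last. */
#include <stdint.h>
#include <stdio.h>

#define NSTAGES 4
static const char *stage_name[NSTAGES] = { "fetch-op", "address", "execute", "write" };

static void route(uint8_t map, const char *label)
{
    printf("%s:", label);
    for (int s = 0; s < NSTAGES; s++)
        if (map & (1u << s))
            printf(" %s", stage_name[s]);   /* skipped stages never see the word */
    printf("\n");
}

int main(void)
{
    route(0x0F, "full word");       /* uses every stage                 */
    route(0x09, "short word");      /* enters at stage 0, skips to 3    */
    route(0x04, "execute-only");    /* enters and exits at stage 2      */
    return 0;
}
```
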