Patents by Inventor Michael Fee

Michael Fee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8468421
    Abstract: A memory system is provided. The memory system includes a memory element that is configured to selectively output data stored to and data fetched from the memory element. An error checking station is configured to receive the data stored to and the data fetched from the memory element. The error checking station is further configured to perform error checking on the data.
    Type: Grant
    Filed: June 23, 2010
    Date of Patent: June 18, 2013
    Assignee: International Business Machines Corporation
    Inventors: Michael Fee, Arthur J. O'Neill, Jr.
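
A minimal C sketch of the error-checking arrangement summarized in the abstract above: a single check is applied to data on both the store path and the fetch path. The use of plain even parity, and all function names, are assumptions for illustration only, not the patented design.

```c
#include <stdint.h>
#include <stdio.h>

/* Even parity of a 64-bit word: returns 1 if an odd number of bits are set. */
static int parity64(uint64_t w) {
    w ^= w >> 32; w ^= w >> 16; w ^= w >> 8;
    w ^= w >> 4;  w ^= w >> 2;  w ^= w >> 1;
    return (int)(w & 1);
}

/* Error-checking station: verify a data word against its recorded parity,
 * whether the word is being stored to or fetched from the memory element. */
static int data_ok(uint64_t data, int recorded_parity) {
    return parity64(data) == recorded_parity;
}

int main(void) {
    uint64_t word = 0xDEADBEEFCAFEF00Dull;
    int p = parity64(word);                 /* parity generated on the store path */
    printf("store check: %d\n", data_ok(word, p));   /* 1: data intact */
    word ^= 1ull << 17;                     /* simulate a single-bit error */
    printf("fetch check: %d\n", data_ok(word, p));   /* 0: error detected */
    return 0;
}
```
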
  • Patent number: 8407442
    Abstract: A computer-program product that includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes receiving a plurality of stores in a store queue, via a processor, comparing a fetch request against the store queue to search for a target store having a same memory address as the fetch request, determining whether the target store is ahead of the fetch request in a same pipeline, and processing the fetch request when it is determined that the target store is ahead of the fetch request.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: March 26, 2013
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Robert J. Sonnelitter, III
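
A brief C sketch of the store-queue comparison summarized above: a fetch address is matched against the pending stores, and the fetch is processed only when the matching store sits ahead of it in the same pipeline. The structure fields and the pipeline-position encoding are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct store_entry {
    uint64_t addr;
    int      pipe_stage;   /* higher value = further ahead in the pipeline */
    bool     valid;
};

/* Returns true if the fetch may be processed now. */
static bool may_process_fetch(const struct store_entry *q, int n,
                              uint64_t fetch_addr, int fetch_stage) {
    for (int i = 0; i < n; i++) {
        if (q[i].valid && q[i].addr == fetch_addr)   /* target store found */
            return q[i].pipe_stage > fetch_stage;    /* must be ahead of the fetch */
    }
    return true;                                     /* no conflicting store */
}

int main(void) {
    struct store_entry queue[2] = {
        { .addr = 0x1000, .pipe_stage = 5, .valid = true },
        { .addr = 0x2000, .pipe_stage = 1, .valid = true },
    };
    printf("%d\n", may_process_fetch(queue, 2, 0x1000, 3));  /* 1: store is ahead */
    printf("%d\n", may_process_fetch(queue, 2, 0x2000, 3));  /* 0: store is behind */
    return 0;
}
```
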
  • Patent number: 8392621
    Abstract: A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation selected from an input into an intermediate temporary memory and an output; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input.
    Type: Grant
    Filed: June 22, 2010
    Date of Patent: March 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Diana Lynn Orf, Robert J. Sonnelitter, III
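
The two-state indicator described above behaves like a single toggle bit gating the temporary memory, as in this sketch: an input is allowed only when the previous operation was an output, and vice versa. Names and the enum encoding are assumptions, not the claimed implementation.

```c
#include <stdbool.h>
#include <stdio.h>

enum last_op { LAST_WAS_OUTPUT = 0, LAST_WAS_INPUT = 1 };   /* the two states */

struct temp_buffer {
    enum last_op indicator;
};

/* Perform an input (write into the buffer) only if the previous op was an output. */
static bool try_input(struct temp_buffer *b) {
    if (b->indicator != LAST_WAS_OUTPUT) return false;
    b->indicator = LAST_WAS_INPUT;
    return true;
}

/* Perform an output (read out of the buffer) only if the previous op was an input. */
static bool try_output(struct temp_buffer *b) {
    if (b->indicator != LAST_WAS_INPUT) return false;
    b->indicator = LAST_WAS_OUTPUT;
    return true;
}

int main(void) {
    struct temp_buffer buf = { LAST_WAS_OUTPUT };   /* empty buffer: ready for input */
    printf("%d %d %d\n", try_input(&buf), try_input(&buf), try_output(&buf)); /* 1 0 1 */
    return 0;
}
```
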
  • Patent number: 8327070
    Abstract: A computer implemented method of optimizing sequential data fetches in a computer system is provided. The method includes fetching a data segment from a main memory, the data segment having a plurality of target data entries; extracting a first portion of the data segment and storing the first portion into a target data cache, the first portion having a first target data entry; and storing the data segment into an intermediate cache line buffer in communication with the target data cache to enable subsequent fetches to a number of target data entries in the data segment.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: December 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael Fee, Arthur J. O'Neill, Jr.
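
An illustrative C sketch of the intermediate cache line buffer described above: the requested entry is placed in the target data cache while the whole fetched segment stays in a line buffer, so later sequential fetches hit the buffer rather than main memory. Segment size, memory layout, and alignment are made-up assumptions.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SEG_ENTRIES 8          /* target data entries per fetched segment */

struct line_buffer {
    uint64_t base;                     /* index of entry 0 in the buffered segment */
    uint32_t entries[SEG_ENTRIES];
    int      valid;
};

static uint32_t main_memory[64];       /* stand-in for main memory */

static uint32_t fetch_entry(struct line_buffer *lb, uint64_t index,
                            uint32_t *target_cache_slot) {
    if (!lb->valid || index < lb->base || index >= lb->base + SEG_ENTRIES) {
        uint64_t base = index - (index % SEG_ENTRIES);        /* align the segment */
        memcpy(lb->entries, &main_memory[base], sizeof lb->entries);
        lb->base = base;
        lb->valid = 1;
        printf("segment fetched from main memory at %llu\n",
               (unsigned long long)base);
    }
    uint32_t value = lb->entries[index - lb->base];
    *target_cache_slot = value;        /* requested entry also placed in target cache */
    return value;
}

int main(void) {
    for (int i = 0; i < 64; i++) main_memory[i] = (uint32_t)(i * 10);
    struct line_buffer lb = { 0 };
    uint32_t slot;
    fetch_entry(&lb, 3, &slot);        /* misses: pulls the whole segment */
    fetch_entry(&lb, 4, &slot);        /* sequential fetch served from the line buffer */
    printf("entry 4 = %u\n", slot);
    return 0;
}
```
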
  • Patent number: 8327078
    Abstract: A computer-implemented method for managing data transfer in a multi-level memory hierarchy that includes receiving a fetch request for allocation of data in a higher level memory, determining whether a data bus between the higher level memory and a lower level memory is available, bypassing an intervening memory between the higher level memory and the lower level memory when it is determined that the data bus is available, and transferring the requested data directly from the higher level memory to the lower level memory.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: December 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
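
The bypass decision in the abstract above reduces to a routing choice like the sketch below: when the bus between the higher level and the lower level is free, the intervening level is skipped; otherwise the transfer is staged through it. Function and enum names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

enum path { VIA_INTERVENING_LEVEL, DIRECT_BYPASS };

/* Bypass the intervening memory only when the direct data bus is free. */
static enum path route_fetch(bool direct_bus_available) {
    return direct_bus_available ? DIRECT_BYPASS : VIA_INTERVENING_LEVEL;
}

int main(void) {
    printf("bus free -> %s\n",
           route_fetch(true)  == DIRECT_BYPASS ? "bypass" : "staged");
    printf("bus busy -> %s\n",
           route_fetch(false) == DIRECT_BYPASS ? "bypass" : "staged");
    return 0;
}
```
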
  • Publication number: 20120278548
    Abstract: Optimizing EDRAM refresh rates in a high performance cache architecture. An aspect of the invention includes receiving a plurality of first signals. A refresh request is transmitted via a refresh requestor to a cache memory at a first refresh rate which includes an interval, including a subset of the first signals. The first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset based on receiving a second signal. The refresh counter is incremented after receiving each of a number of refresh requests. A current count is transmitted from a refresh counter to the refresh requestor based on receiving a third signal. The refresh request is transmitted at a second refresh rate, which is less than the first refresh rate. The refresh request is transmitted based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Application
    Filed: July 11, 2012
    Publication date: November 1, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, JR., Scott B. Swaney
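
A behavioral C sketch of the refresh-rate control summarized above (and in the related grant 8244972 later in this listing): a refresh counter tracks issued refresh requests, and once the reported count exceeds a threshold the requestor drops from the maximum refresh rate to a slower one. The interval lengths and threshold are made-up values, not figures from the application.

```c
#include <stdio.h>

#define MAX_RATE_INTERVAL   4    /* cycles between refreshes at the maximum rate */
#define SLOW_RATE_INTERVAL  16   /* cycles between refreshes at the reduced rate */
#define REFRESH_THRESHOLD   3    /* counts above this switch to the reduced rate */

struct refresh_requestor {
    int refresh_count;           /* incremented per refresh request issued */
    int interval;                /* current refresh interval */
};

static void on_refresh_issued(struct refresh_requestor *r) {
    r->refresh_count++;
    if (r->refresh_count > REFRESH_THRESHOLD)   /* count exceeds the refresh threshold */
        r->interval = SLOW_RATE_INTERVAL;       /* fall back to the second, lower rate */
}

static void on_reset_signal(struct refresh_requestor *r) {
    r->refresh_count = 0;                       /* reset signal clears the counter */
    r->interval = MAX_RATE_INTERVAL;
}

int main(void) {
    struct refresh_requestor r;
    on_reset_signal(&r);
    for (int i = 0; i < 6; i++) {
        printf("refresh %d issued, interval now %d cycles\n", i + 1, r.interval);
        on_refresh_issued(&r);
    }
    return 0;
}
```
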
  • Publication number: 20120215995
    Abstract: A computer-implemented method that includes receiving a plurality of stores in a store queue, via a processor, comparing a fetch request against the store queue to search for a target store having a same memory address as the fetch request, determining whether the target store is ahead of the fetch request in a same pipeline, and processing the fetch request when it is determined that the target store is ahead of the fetch request.
    Type: Application
    Filed: April 30, 2012
    Publication date: August 23, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Robert J. Sonnelitter, III
  • Patent number: 8250243
    Abstract: A computer-implemented method for collecting diagnostic data within a multiprocessor system that includes capturing diagnostic data via a plurality of collection points disposed at a source location within the multiprocessor system, routing the captured diagnostic data to a data collection station at the source location, providing a plurality of buffers within the data collection station, and temporarily storing the captured diagnostic data on at least one of the plurality of buffers, and transferring the captured diagnostic data to a target storage location on a same chip as the source location or another storage location on a same node.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Ekaterina M. Ambroladze, Michael Fee, Christine C. Jones
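
A rough sketch of the trace-collection flow described above: collection points push captured diagnostic words into one of the station's buffers, and the station later drains a buffer to a target storage location. Buffer count, depth, and the drain policy are illustrative assumptions only.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BUFFERS 2
#define BUF_DEPTH   4

struct collection_station {
    uint32_t buf[NUM_BUFFERS][BUF_DEPTH];
    int      fill[NUM_BUFFERS];
};

/* A collection point routes one captured word to the first buffer with room. */
static int capture(struct collection_station *s, uint32_t word) {
    for (int b = 0; b < NUM_BUFFERS; b++) {
        if (s->fill[b] < BUF_DEPTH) {
            s->buf[b][s->fill[b]++] = word;
            return b;
        }
    }
    return -1;                                   /* all buffers full: data dropped */
}

/* Transfer one buffer's contents to the target storage location. */
static void drain(struct collection_station *s, int b, uint32_t *target) {
    for (int i = 0; i < s->fill[b]; i++) target[i] = s->buf[b][i];
    s->fill[b] = 0;
}

int main(void) {
    struct collection_station st = { 0 };
    uint32_t target[BUF_DEPTH];
    for (uint32_t w = 0; w < 5; w++)
        printf("word %u -> buffer %d\n", w, capture(&st, 0xA000 + w));
    drain(&st, 0, target);
    printf("buffer 0 drained, first word 0x%X\n", (unsigned)target[0]);
    return 0;
}
```
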
  • Publication number: 20120210188
    Abstract: Handling corrupted background data in an out-of-order processing environment. Modified data is stored on a byte of a word having at least one byte of background data. A byte valid vector and a byte store bit are added to the word. Parity checking is done on the word. If the word does not contain corrupted background data, the word is propagated to the next level of cache. If the word contains corrupted background data, a copy of the word is fetched from a next level of cache that is ECC protected, and the byte holding the modified data is extracted from the word and swapped for the corresponding byte in the word copy. The word copy is then written into the next level of cache that is ECC protected.
    Type: Application
    Filed: February 10, 2011
    Publication date: August 16, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Fee, Christian Habermann, Christian Jacobi, Diana L. Orf, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann
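
The byte-merge step described above can be sketched as below: when a word carrying modified bytes fails parity, a clean copy is fetched from the ECC-protected level, and only the bytes the byte-valid vector marks as modified are swapped into that copy before it is written onward. The 8-bit byte-valid vector layout is an assumption.

```c
#include <stdint.h>
#include <stdio.h>

/* Replace, in the ECC-protected copy, every byte the byte-valid vector marks
 * as modified with the corresponding byte from the (partly corrupt) word. */
static uint64_t merge_modified_bytes(uint64_t word, uint64_t ecc_copy,
                                     uint8_t byte_valid) {
    uint64_t result = ecc_copy;
    for (int i = 0; i < 8; i++) {
        if (byte_valid & (1u << i)) {
            uint64_t mask = 0xFFull << (8 * i);
            result = (result & ~mask) | (word & mask);
        }
    }
    return result;
}

int main(void) {
    uint64_t word     = 0xBADBADBADBAD00AAull;   /* byte 0 holds the real store data */
    uint64_t ecc_copy = 0x1122334455667788ull;   /* clean copy from the ECC-protected cache */
    uint64_t merged   = merge_modified_bytes(word, ecc_copy, 0x01);
    printf("merged word: 0x%016llX\n", (unsigned long long)merged);  /* 0x11223344556677AA */
    return 0;
}
```
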
  • Patent number: 8244972
    Abstract: Controlling refresh request transmission rates in a cache comprising: a refresh requestor configured to transmit a refresh request to a cache memory at a first refresh rate, the first refresh rate comprising an interval, the interval comprising receiving a plurality of first signals, the first refresh rate corresponding to a maximum refresh rate, and a refresh counter operatively coupled to the refresh requestor and configured to reset in response to receiving a second signal, increment in response to receiving each of a plurality of refresh requests from the refresh requestor, and reset and transmit a current count to the refresh requestor in response to receiving a third signal, wherein the refresh requestor is configured to transmit a refresh request at a second refresh rate, in response to receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: August 14, 2012
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, Jr., Scott B. Swaney
  • Publication number: 20110320779
    Abstract: A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Christine C. Jones, Arthur J. O'Neill, JR., Diana L. Orf, Robert J. Sonnelitter, III
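
A small sketch of the per-subcontroller active-time tracking described above: each subcontroller increments its own counter while it is busy with an instruction, and the finished value is written to a shared storage area. The number of subcontrollers and the cycle-based counting are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SUBCONTROLLERS 4

struct subcontroller {
    bool     busy;
    uint64_t active_time;          /* cycles spent processing the instruction */
};

static uint64_t shared_pipeline_storage[NUM_SUBCONTROLLERS];

static void tick(struct subcontroller *sc) {
    if (sc->busy) sc->active_time++;                 /* count only while active */
}

static void finish(struct subcontroller *sc, int id) {
    sc->busy = false;
    shared_pipeline_storage[id] = sc->active_time;   /* publish the active-time value */
}

int main(void) {
    struct subcontroller sc[NUM_SUBCONTROLLERS] = {{ false, 0 }};
    sc[1].busy = true;                               /* subcontroller 1 accepts an instruction */
    for (int cycle = 0; cycle < 10; cycle++)
        for (int i = 0; i < NUM_SUBCONTROLLERS; i++) tick(&sc[i]);
    finish(&sc[1], 1);
    printf("subcontroller 1 active for %llu cycles\n",
           (unsigned long long)shared_pipeline_storage[1]);
    return 0;
}
```
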
  • Publication number: 20110320701
    Abstract: Optimizing refresh request transmission rates in a high performance cache comprising: a refresh requestor configured to transmit a refresh request to a cache memory at a first refresh rate, the first refresh rate comprising an interval, the interval comprising receiving a plurality of first signals, the first refresh rate corresponding to a maximum refresh rate, and a refresh counter operatively coupled to the refresh requestor and configured to reset in response to receiving a second signal, increment in response to receiving each of a plurality of refresh requests from the refresh requestor, and reset and transmit a current count to the refresh requestor in response to receiving a third signal, wherein the refresh requestor is configured to transmit a refresh request at a second refresh rate, in response to receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, JR., Scott B. Swaney
  • Publication number: 20110320721
    Abstract: A computer-implemented method for managing data transfer in a multi-level memory hierarchy that includes receiving a fetch request for allocation of data in a higher level memory, determining whether a data bus between the higher level memory and a lower level memory is available, bypassing an intervening memory between the higher level memory and the lower level memory when it is determined that the data bus is available, and transferring the requested data directly from the higher level memory to the lower level memory.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, JR., Robert J. Sonnelitter, III
  • Publication number: 20110320740
    Abstract: A computer implemented method of optimizing sequential data fetches in a computer system is provided. The method includes fetching a data segment from a main memory, the data segment having a plurality of target data entries; extracting a first portion of the data segment and storing the first portion into a target data cache, the first portion having a first target data entry; and storing the data segment into an intermediate cache line buffer in communication with the target data cache to enable subsequent fetches to a number of target data entries in the data segment.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Michael Fee, Arthur J. O'Neill, JR.
  • Publication number: 20110320744
    Abstract: A computer-implemented method for collecting diagnostic data within a multiprocessor system that includes capturing diagnostic data via a plurality of collection points disposed at a source location within the multiprocessor system, routing the captured diagnostic data to a data collection station at the source location, providing a plurality of buffers within the data collection station, and temporarily storing the captured diagnostic data on at least one of the plurality of buffers, and transferring the captured diagnostic data to a target storage location on a same chip as the source location or another storage location on a same node.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Christine C. Jones
  • Publication number: 20110320696
    Abstract: A memory refresh requestor, a memory request interpreter, a cache memory, and a cache controller on a single chip. The cache controller is configured to receive a memory access request for a memory address range in the cache memory, detect that the cache memory located at the memory address range is available, and send the memory access request to the memory request interpreter when the memory address range is available. The memory request interpreter is configured to receive the memory access request from the cache controller, determine whether the memory access request is a request to refresh the contents of the memory address range, and refresh data in the memory address range when the memory access request is a request to refresh memory.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Fee, Arthur J. O'Neill, JR., Robert J. Sonnelitter, III
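
An illustrative C sketch of the request flow described above: the cache controller forwards a request only when the addressed range is available, and the memory request interpreter then decides whether it is a refresh of that range or an ordinary access. The request encoding and availability check are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum req_kind { REQ_READ, REQ_WRITE, REQ_REFRESH };

struct mem_request {
    enum req_kind kind;
    uint64_t      addr;
};

static bool range_available(uint64_t addr) {
    (void)addr;
    return true;                 /* placeholder: assume the range is not in use */
}

/* Memory request interpreter: refresh the range or perform the access. */
static void interpret(const struct mem_request *r) {
    if (r->kind == REQ_REFRESH)
        printf("refreshing contents at 0x%llx\n", (unsigned long long)r->addr);
    else
        printf("normal %s at 0x%llx\n",
               r->kind == REQ_READ ? "read" : "write", (unsigned long long)r->addr);
}

/* Cache controller: pass the request on only when the range is available. */
static void cache_controller(const struct mem_request *r) {
    if (range_available(r->addr))
        interpret(r);
}

int main(void) {
    struct mem_request refresh = { REQ_REFRESH, 0x4000 };
    struct mem_request read    = { REQ_READ,    0x8000 };
    cache_controller(&refresh);
    cache_controller(&read);
    return 0;
}
```
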
  • Publication number: 20110321053
    Abstract: A method that includes providing LRU selection logic which controllably passes requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to the LRU selection logic at the first level when it is determined that the request is active, determining whether the request is the LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of the request groups at the second level to determine whether the request is the LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Diana Lynn Orf
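
A condensed C sketch of the two-level LRU selection described above: within each request group the least-recently-used active requester wins, and a second level then picks the least-recently-used winner across groups. Representing LRU order by a simple last-grant stamp is an assumption for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define GROUPS     2
#define PER_GROUP  3

struct requester {
    bool active;                 /* request currently pending */
    int  last_grant;             /* smaller value = used longer ago (more LRU) */
};

/* Index of the least-recently-used active request, or -1 if none is active. */
static int pick_lru(const struct requester *r, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (r[i].active && (best < 0 || r[i].last_grant < r[best].last_grant))
            best = i;
    return best;
}

int main(void) {
    struct requester req[GROUPS][PER_GROUP] = {
        { {true, 7}, {false, 1}, {true, 4} },     /* group 0 */
        { {true, 2}, {true, 9}, {false, 0} },     /* group 1 */
    };
    /* First level: LRU request inside each group. */
    int winner[GROUPS];
    struct requester finalists[GROUPS];
    for (int g = 0; g < GROUPS; g++) {
        winner[g] = pick_lru(req[g], PER_GROUP);
        finalists[g] = winner[g] >= 0 ? req[g][winner[g]]
                                      : (struct requester){ false, 0 };
    }
    /* Second level: LRU among the group winners gets the shared resource. */
    int g = pick_lru(finalists, GROUPS);
    printf("grant: group %d, requester %d\n", g, winner[g]);
    return 0;
}
```
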
  • Publication number: 20110320863
    Abstract: Dynamic re-allocation of cache buffer slots includes moving data out of a reserved buffer slot upon detecting an error in the reserved buffer slot, creating a new buffer slot, and storing the data moved out of the reserved buffer slot in the new buffer slot.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, JR., Diana Lynn Orf, Robert J. Sonnelitter, III
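
A minimal sketch of the buffer-slot re-allocation described above: when an error is detected in a reserved slot, its data is moved out, the failing slot is retired, and a new slot is created from the free pool to hold the rescued data. The pool layout and error flag are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SLOTS 4

struct buffer_slot {
    uint64_t data;
    bool     in_use;
    bool     faulty;             /* error detected in this slot */
};

static int reallocate_on_error(struct buffer_slot *slots, int bad) {
    uint64_t rescued = slots[bad].data;          /* move data out of the reserved slot */
    slots[bad].in_use = false;
    slots[bad].faulty = true;                    /* retire the failing slot */
    for (int i = 0; i < NUM_SLOTS; i++) {
        if (!slots[i].in_use && !slots[i].faulty) {   /* create a new buffer slot */
            slots[i].data = rescued;                  /* store the rescued data there */
            slots[i].in_use = true;
            return i;
        }
    }
    return -1;                                   /* no spare slot available */
}

int main(void) {
    struct buffer_slot slots[NUM_SLOTS] = { { 0xCAFE, true, false } };
    int new_slot = reallocate_on_error(slots, 0);
    printf("data moved from slot 0 to slot %d\n", new_slot);
    return 0;
}
```
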
  • Publication number: 20110320716
    Abstract: A method of debugging a memory element is provided. The method includes initializing a line fetch controller with at least one of write data and read data; utilizing at least two separate clocks for performing at least one of write requests and read requests based on the at least one of the write data and the read data; and debugging the memory element based on the at least one of write requests and read requests.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Adam B. Collura, Michael Fee, Arthur J. O'Neill, JR., Gerard M. Salem, Robert J. Sonnelitter, III
  • Publication number: 20110320736
    Abstract: A computer-implemented method that includes receiving a plurality of stores in a store queue, via a processor, comparing a fetch request against the store queue to search for a target store having a same memory address as the fetch request, determining whether the target store is ahead of the fetch request in a same pipeline, and processing the fetch request when it is determined that the target store is ahead of the fetch request.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Robert J. Sonnelitter, III